Testing the Effectiveness of a Career Education Program for Potential Dropouts.
ERIC Educational Resources Information Center
Schmidt, Wesley I.; Dykeman, Bruce F.
1979-01-01
Using a randomized two group pre-test/post-test research design, the study found that (1) significant results occurred on those variables measuring levels of vocational maturity, days of school attendance, and numbers of disciplinary referrals and (2) non-significant results occurred on those variables measuring proportions of students dropping…
ERIC Educational Resources Information Center
De Boeck, Paul
2008-01-01
It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…
Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2013-02-01
In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
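A minimal Monte Carlo sketch of the procedure described above: draw values for the random variables of a drilling program, compute the economic magnitudes, and repeat many trials so that histograms approximate their densities. The distributions, prices, and cost figures below are illustrative assumptions, not the values used by Kaufman and Bradley.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_wells=50, p_success=0.1, price=10.0, dev_cost=1e6):
    # Number of successes in the drilling program (binomial draw).
    successes = rng.binomial(n_wells, p_success)
    if successes == 0:
        return 0.0, np.nan  # no discovery: zero return, undefined unit cost
    # Reservoir sizes drawn independently from a lognormal distribution (illustrative).
    sizes = rng.lognormal(mean=13.0, sigma=1.2, size=successes)  # barrels per reservoir
    production = sizes.sum()
    net_return = price * production - dev_cost * successes
    unit_cost = dev_cost * successes / production
    return net_return, unit_cost

# Repeat many trials; histograms of these results approximate the probability
# densities of net return and unit production cost.
results = np.array([one_trial() for _ in range(10_000)])
net_returns, unit_costs = results[:, 0], results[:, 1]
print("median net return:", np.nanmedian(net_returns))
print("median unit production cost:", np.nanmedian(unit_costs))
```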
Effect of randomness in logistic maps
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2015-01-01
We study a random logistic map x_{t+1} = a_t x_t [1 - x_t], where the a_t are bounded (q_1 ≤ a_t ≤ q_2) random variables independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However
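A brief sketch of the random logistic map described above, with a_t drawn independently and uniformly from [q_1, q_2] at each step; the particular values of q_1 and q_2 are only an example.

```python
import numpy as np

def random_logistic(q1, q2, n_steps=10_000, x0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(n_steps - 1):
        a_t = rng.uniform(q1, q2)            # a_t drawn independently each step
        x[t + 1] = a_t * x[t] * (1.0 - x[t])
    return x

# Example with the maximum allowed value of a_t equal to 4 (the fully ergodic case noted above).
trajectory = random_logistic(q1=3.5, q2=4.0)
print(trajectory[-5:])
```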
Johnson, S M; Christensen, A; Bellamy, G T
1976-01-01
Five children referred to a child-family intervention program wore a radio transmitter in the home during pre-intervention and termination assessments. The transmitter broadcast to a receiver-recording apparatus in the home (either activated by an interval timer at predetermined "random" times or by parents at predetermined "picked" times). "Picked" times were parent-selected situations during which problems typically occurred (e.g., bedtime). Parents activated the recorder regularly whether or not problems occurred. Child-deviant, parent-negative, and parent-commanding behaviors were significantly higher at the picked times during pretest than at random times. At posttest, behaviors in all three classes were substantially reduced at picked times, but not at random times. For individual subject data, reductions occurred in at least two of the three dependent variables for three of the five cases during random time assessments. In general, the behavioral outcome data corresponded to parent-attitude reports and parent-collected observation data.
A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.
Thoemmes, Felix; Rose, Norman
2014-01-01
The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the likelihood that the missing-at-random assumption holds is to include many covariates as so-called auxiliary variables. These variables are either included based on data considerations or in an inclusive fashion; that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways to avoid selecting bias-inducing covariates as auxiliary variables.
Benford's law and continuous dependent random variables
NASA Astrophysics Data System (ADS)
Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine
2018-01-01
Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices with entries independently drawn from nice random variables converge to Benford's Law.
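A hedged numerical sketch in the spirit of the decomposition processes described above: a conserved length is repeatedly split at uniformly random points, and the leading digits of the resulting piece lengths are compared with the Benford frequencies. The specific splitting rule is an illustrative assumption, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fragment(length=1.0, n_divisions=12):
    """Repeatedly split a conserved quantity at uniform random points."""
    pieces = [length]
    for _ in range(n_divisions):
        new_pieces = []
        for p in pieces:
            u = rng.uniform(0.0, 1.0)
            new_pieces.extend([u * p, (1.0 - u) * p])
        pieces = new_pieces
    return np.array(pieces)

pieces = fragment()
# Leading digit (base 10) of each piece length.
leading = (pieces / 10.0 ** np.floor(np.log10(pieces))).astype(int)
observed = np.array([(leading == d).mean() for d in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
print(np.round(observed, 3))
print(np.round(benford, 3))
```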
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.
Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E
2000-10-01
To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.
ERIC Educational Resources Information Center
Abry, Tashia D. S.; Rimm-Kaufman, Sara E.; Larsen, Ross A.; Brewer, Alix J.
2011-01-01
The present study examines data collected during the second year of a three-year longitudinal cluster randomized controlled trial, the Responsive Classroom Efficacy Study (RCES). In the context of an RCT, the research questions address naturally occurring variability in the independent variables of interest (i.e., teachers' (fidelity of…
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where V_j is the jth random number in the sequence of a stream of uniform random … numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = -P(b,c,d) … Defender, C * P(b,c,d) … We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
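The thesis excerpt above appeals to the inverse transform method for unit-mean exponential random variables; a minimal sketch of that standard construction follows (the thesis's own quantities V_j and X(b,c,d) are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(2)

# Inverse transform method: if U ~ Uniform(0, 1), then X = -ln(U) is
# exponentially distributed with unit mean, since P(X <= x) = 1 - exp(-x).
u = rng.uniform(size=100_000)
x = -np.log(u)
print(x.mean())   # close to 1.0
print(x.var())    # close to 1.0
```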
Molas, Marek; Lesaffre, Emmanuel
2008-12-30
Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers can greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
The Newcomb-Benford law in its relation to some common distributions.
Formann, Anton K
2010-05-07
An often reported, but nevertheless persistently striking observation, formalized as the Newcomb-Benford law (NBL), is that the frequencies with which the leading digits of numbers occur in a large variety of data are far from uniform. Most spectacular seems to be the fact that in many data the leading digit 1 occurs in nearly one third of all cases. Explanations for this uneven distribution of the leading digits were, among others, scale- and base-invariance. Little attention, however, has been paid to the interrelation between the distribution of the significant digits and the distribution of the observed variable. It is shown here by simulation that long right-tailed distributions of a random variable are compatible with the NBL, and that for distributions of the ratio of two random variables the fit generally improves. Distributions not putting most mass on small values of the random variable (e.g. symmetric distributions) fail to fit. Hence, the validity of the NBL needs the predominance of small values and, when thinking of real-world data, a majority of small entities. Analyses of data on stock prices, the areas and numbers of inhabitants of countries, and the starting page numbers of papers from a bibliography sustain this conclusion. In all, these findings may help to understand the mechanisms behind the NBL and the conditions needed for its validity. That this law is not only of scientific interest per se, but also has substantial implications, can be seen from those fields where it was suggested to be put into practice. These fields reach from the detection of irregularities in data (e.g. economic fraud) to optimizing the architecture of computers regarding number representation, storage, and round-off errors.
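A short simulation sketch in the spirit of the comparison described above: leading-digit frequencies for a long right-tailed distribution (lognormal), for the ratio of two such variables, and for a symmetric distribution whose mass lies away from small values. The distribution parameters are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def leading_digit_freqs(x):
    x = np.abs(x[x != 0])
    digits = (x / 10.0 ** np.floor(np.log10(x))).astype(int)  # first significant digit
    return np.array([(digits == d).mean() for d in range(1, 10)])

n = 200_000
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
lognormal = rng.lognormal(mean=0.0, sigma=2.0, size=n)          # long right tail
ratio = lognormal / rng.lognormal(mean=0.0, sigma=2.0, size=n)  # ratio of two such variables
symmetric = rng.normal(loc=5.0, scale=1.0, size=n)              # mass away from small values

for name, sample in [("lognormal", lognormal), ("ratio", ratio), ("normal", symmetric)]:
    # Sum of absolute deviations from the Benford frequencies (smaller = better fit).
    print(name, np.abs(leading_digit_freqs(sample) - benford).sum().round(3))
```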
Do Conditional Reinforcers Count?
ERIC Educational Resources Information Center
Davison, Michael; Baum, William M.
2006-01-01
Six pigeons were trained on a procedure in which seven components arranged different food-delivery ratios on concurrent variable-interval schedules each session. The components were unsignaled, lasted for 10 food deliveries, and occurred in random order with a 60-s blackout between components. The schedules were arranged using a switching-key…
Quantifying Intrinsic and Extrinsic Variability in Stochastic Gene Expression Models
Singh, Abhyudai; Soltani, Mohammad
2013-01-01
Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters. PMID:24391934
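A hedged sketch of the two-color reporter decomposition referred to above, using the standard two-reporter estimators (in the style of Elowitz and Swain); whether these coincide exactly with the authors' analytical formulas is an assumption, and the synthetic data model is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic two-color data: a shared extrinsic factor scales both reporters,
# while intrinsic fluctuations are independent for each reporter (illustrative assumption).
n_cells = 50_000
extrinsic = rng.gamma(shape=10.0, scale=10.0, size=n_cells)
r = extrinsic * rng.gamma(shape=20.0, scale=1.0 / 20.0, size=n_cells)
g = extrinsic * rng.gamma(shape=20.0, scale=1.0 / 20.0, size=n_cells)

mean_r, mean_g = r.mean(), g.mean()
# Standard two-reporter estimators (normalized squared noise).
intrinsic_noise = np.mean((r - g) ** 2) / (2.0 * mean_r * mean_g)
extrinsic_noise = (np.mean(r * g) - mean_r * mean_g) / (mean_r * mean_g)
total_noise = intrinsic_noise + extrinsic_noise
print(intrinsic_noise, extrinsic_noise, total_noise)
```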
Suicide in the Army National Guard: An Empirical Inquiry
ERIC Educational Resources Information Center
Griffith, James
2012-01-01
Since 2004, suicides in the U.S. military have risen, most notably in the Army National Guard (ARNG). Data used in this study were obtained for suicides occurring from 2007 to 2010 and for a random sample of nonsuicides from the general ARNG population. Of the military-related variables considered, a few showed relationships to suicide. Rather,…
Dissociating Temporal Preparation Processes as a Function of the Inter-Trial Interval Duration
ERIC Educational Resources Information Center
Vallesi, Antonino; Lozano, Violeta N.; Correa, Angel
2013-01-01
Preparation over time is a ubiquitous capacity which implies decreasing uncertainty about when critical events will occur. This capacity is usually studied with the variable foreperiod paradigm, which consists in the random variation of the time interval (foreperiod) between a warning stimulus and a target. With this paradigm, response time (RT)…
Natale, F; La Penna, C; Padoa, A; Agostini, M; De Simone, E; Cervigni, M
2009-01-01
We compared safety and efficacy of Gynemesh PS and Pelvicol for recurrent cystocele repair. One hundred ninety patients were randomly divided into Gynemesh PS and Pelvicol groups and underwent tension-free cystocele repair. The Chi-square test was used to compare categorical variables, the paired t test for continuous parametric variables, and the Mann-Whitney test for continuous nonparametric variables. Ninety-six Gynemesh PS patients and 94 Pelvicol patients were studied. Mesh erosions occurred in 6.3% of Gynemesh PS patients. No erosions were observed in Pelvicol patients (p = 0.02). Objective cure was 71.9% for Gynemesh PS and 56.4% for Pelvicol (p = 0.06). Subjective cure was the same in both groups except for better sexuality in the Pelvicol group. At 24 months follow-up, only Gynemesh PS patients had mesh erosions. Anatomical outcome was similar in the two groups. Pelvicol gave a better impact on voiding and sexuality.
Trombetti, Andrea; Hars, Mélany; Herrmann, François R; Kressig, Reto W; Ferrari, Serge; Rizzoli, René
2011-03-28
Falls occur mainly while walking or performing concurrent tasks. We determined whether a music-based multitask exercise program improves gait and balance and reduces fall risk in elderly individuals. We conducted a 12-month randomized controlled trial involving 134 community-dwelling individuals older than 65 years, who are at increased risk of falling. They were randomly assigned to an intervention group (n = 66) or a delayed intervention control group scheduled to start the program 6 months later (n = 68). The intervention was a 6-month multitask exercise program performed to the rhythm of piano music. Change in gait variability under dual-task condition from baseline to 6 months was the primary end point. Secondary outcomes included changes in balance, functional performances, and fall risk. At 6 months, there was a reduction in stride length variability (adjusted mean difference, -1.4%; P < .002) under dual-task condition in the intervention group, compared with the delayed intervention control group. Balance and functional tests improved compared with the control group. There were fewer falls in the intervention group (incidence rate ratio, 0.46; 95% confidence interval, 0.27-0.79) and a lower risk of falling (relative risk, 0.61; 95% confidence interval, 0.39-0.96). Similar changes occurred in the delayed intervention control group during the second 6-month period with intervention. The benefit of the intervention on gait variability persisted 6 months later. In community-dwelling older people at increased risk of falling, a 6-month music-based multitask exercise program improved gait under dual-task condition, improved balance, and reduced both the rate of falls and the risk of falling. Trial Registration clinicaltrials.gov Identifier: NCT01107288.
Allelic variability in species and stocks of Lake Superior ciscoes (Coregoninae)
Todd, Thomas N.
1981-01-01
Starch gel electrophoresis was used as a means of recognizing species and stocks in Lake Superior Coregonus. Allelic variability at isocitrate dehydrogenase and glycerol-3-phosphate dehydrogenase loci was recorded for samples of lake herring (Coregonus artedii), bloater (C. hoyi), kiyi (C. kiyi), and shortjaw cisco (C. zenithicus) from five Lake Superior localities. The observed frequencies of genotypes within each subsample did not differ significantly from those expected on the basis of random mating, and suggested that each subsample represented either a random sample from a larger randomly mating population or an independent and isolated subpopulation within which mating was random. Significant contingency χ² values for comparisons between both localities and species suggested that more than one randomly mating population occurred among the Lake Superior ciscoes, but did not reveal how many such populations there were. In contrast to the genetic results of this study, morphology seems to be a better descriptor of cisco stocks, and identification of cisco stocks and species will still have to be based on morphological criteria until more data are forthcoming. Where several species are sympatric, management should strive to preserve the least abundant. Failure to do so could result in the extinction or depletion of the rarer forms.
NASA Astrophysics Data System (ADS)
Molinario, G.; Hansen, M.; Potapov, P.
2016-12-01
High resolution satellite imagery obtained from the National Geospatial-Intelligence Agency through NASA was used to photo-interpret sample areas within the DRC. The area sampled is a stratification of the forest cover loss from circa 2014 that either occurred completely within the previously mapped homogeneous area of the Rural Complex, at its interface with primary forest, or in isolated forest perforations. Previous research resulted in a map of these areas that contextualizes forest loss depending on where it occurs and with what spatial density, leading to a better understanding of the real impacts of livelihood shifting cultivation on forest degradation. The stratified random sampling approach of these areas allows the characterization of the constituent land cover types within these areas, and their variability throughout the DRC. Shifting cultivation has a variable forest degradation footprint in the DRC depending on many factors that drive it, but its role in forest degradation and deforestation had been disputed, leading us to investigate and quantify the clearing and reuse rates within the strata throughout the country.
Cardiorespiratory interactions to external stimuli.
Bernardi, L; Porta, C; Spicuzza, L; Sleight, P
2005-09-01
Respiration is a powerful modulator of heart rate variability, and of baro- or chemo-reflex sensitivity. This occurs via a mechanical effect of breathing that synchronizes all cardiovascular variables at the respiratory rhythm, particularly when breathing occurs at a slow rate coincident with the Mayer waves in arterial pressure (approximately 6 cycles/min). Recitation of the rosary prayer (or of most mantras) induces a marked enhancement of these slow rhythms, whereas random verbalization or random breathing does not. This phenomenon in turn increases baroreflex sensitivity and reduces chemoreflex sensitivity, leading to increases in parasympathetic and reductions in sympathetic activity. The opposite can be seen during either verbalization or mental stress tests. Qualitatively similar effects can be obtained even by passive listening to more or less rhythmic auditory stimuli, such as music, and the speed of the rhythm (rather than the style) appears to be one of the main determinants of the cardiovascular and respiratory responses. These findings have clinical relevance. Appropriate modulation of breathing can improve or restore autonomic control of the cardiovascular and respiratory systems in relevant diseases such as hypertension and heart failure, and might therefore help improve exercise tolerance, quality of life, and ultimately, survival.
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
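Although the article above describes Microsoft Excel macros, the same schedule values can be generated in a few lines of any language. A sketch in Python, assuming the common constant-probability definition of random-ratio and random-interval schedules and a simple uniform spread for the variable-ratio schedule; the article's macros may use different progressions.

```python
import numpy as np

rng = np.random.default_rng(5)

def variable_ratio(mean_ratio, n_values):
    # Ratios spread uniformly around the mean (one simple convention, assumed here).
    return rng.integers(1, 2 * mean_ratio, size=n_values, endpoint=True)

def random_ratio(p, n_values):
    # Constant probability of reinforcement per response: geometric response counts.
    return rng.geometric(p, size=n_values)

def random_interval(mean_interval_s, n_values):
    # Constant probability per unit time: exponential inter-reinforcement intervals.
    return rng.exponential(mean_interval_s, size=n_values)

print(variable_ratio(10, 5), random_ratio(0.1, 5), random_interval(30, 5))
```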
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
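A minimal sketch of the extreme-value idea described above: if each of many load events in a mission is a random variable, the mission limit load (the largest of them) has its own distribution, and a design limit load can be read off as a high quantile. The per-event load model below is an illustrative assumption; the report derives the limit-load distribution from dynamic load simulations instead.

```python
import numpy as np

rng = np.random.default_rng(6)

n_missions = 5_000
events_per_mission = 1_000

# Illustrative per-event load model (Gaussian).
loads = rng.normal(loc=1.0, scale=0.2, size=(n_missions, events_per_mission))
limit_loads = loads.max(axis=1)          # largest load occurring in each mission

# Design limit load: a particular value of the random limit load, e.g. the
# value not exceeded in 99% of missions.
design_limit_load = np.quantile(limit_loads, 0.99)
print(design_limit_load)
```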
Packet Randomized Experiments for Eliminating Classes of Confounders
Pavela, Greg; Wiener, Howard; Fontaine, Kevin R.; Fields, David A.; Voss, Jameson D.; Allison, David B.
2014-01-01
Background Although randomization is considered essential for causal inference, it is often not possible to randomize in nutrition and obesity research. To address this, we develop a framework for an experimental design—packet randomized experiments (PREs), which improves causal inferences when randomization on a single treatment variable is not possible. This situation arises when subjects are randomly assigned to a condition (such as a new roommate) which varies in one characteristic of interest (such as weight), but also varies across many others. There has been no general discussion of this experimental design, including its strengths, limitations, and statistical properties. As such, researchers are left to develop and apply PREs on an ad hoc basis, limiting its potential to improve causal inferences among nutrition and obesity researchers. Methods We introduce PREs as an intermediary design between randomized controlled trials and observational studies. We review previous research that used the PRE design and describe its application in obesity-related research, including random roommate assignments, heterochronic parabiosis, and the quasi-random assignment of subjects to geographic areas. We then provide a statistical framework to control for potential packet-level confounders not accounted for by randomization. Results PREs have successfully been used to improve causal estimates of the effect of roommates, altitude, and breastfeeding on weight outcomes. When certain assumptions are met, PREs can asymptotically control for packet-level characteristics. This has the potential to statistically estimate the effect of a single treatment even when randomization to a single treatment did not occur. Conclusions Applying PREs to obesity-related research will improve decisions about clinical, public health, and policy actions insofar as it offers researchers new insight into cause and effect relationships among variables. PMID:25444088
Effects of magnitude, depth, and time on cellular seismology forecasts
NASA Astrophysics Data System (ADS)
Fisher, Steven Wolf
This study finds that, in most cases analyzed to date, past seismicity tends to delineate zones where future earthquakes are likely to occur. Network seismicity catalogs for the New Madrid Seismic Zone (NMSZ), Australia (AUS), California (CA), and Alaska (AK) are analyzed using modified versions of the Cellular Seismology (CS) method of Kafka (2002, 2007). The percentage of later occurring earthquakes located near earlier occurring earthquakes typically exceeds the expected percentage for randomly distributed later occurring earthquakes, and the specific percentage is influenced by several variables, including magnitude, depth, time, and tectonic setting. At 33% map area coverage, hit percents are typically 85-95% in the NMSZ, 50-60% in AUS, 75-85% in CA, and 75-85% in AK. Statistical significance testing is performed on trials analyzing the same variables so that the overall regions can be compared, although some tests are inconclusive due to the small number of earthquake sample sizes. These results offer useful insights into understanding the capabilities and limits of CS studies, which can provide guidance for improving the seismicity-based components of seismic hazard assessments.
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
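A brief numerical sketch of one topic listed above, the random phasor sum: adding many unit-amplitude phasors with independent uniform phases yields a complex field whose amplitude follows the Rayleigh statistics underlying fully developed speckle. The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n_sums = 20_000       # independent realizations of the random phasor sum
n_phasors = 100       # phasors per sum

phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_sums, n_phasors))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_phasors)   # normalized phasor sum
amplitude = np.abs(field)

# For a Rayleigh-distributed amplitude, mean^2 / mean-square equals pi / 4.
print((amplitude.mean() ** 2) / (amplitude ** 2).mean(), np.pi / 4.0)
```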
Predicting reduced visibility related crashes on freeways using real-time traffic flow data.
Hassan, Hany M; Abdel-Aty, Mohamed A
2013-06-01
The main objective of this paper is to investigate whether real-time traffic flow data, collected from loop detectors and radar sensors on freeways, can be used to predict crashes occurring at reduced visibility conditions. In addition, it examines the difference between significant factors associated with reduced visibility related crashes to those factors correlated with crashes occurring at clear visibility conditions. Random Forests and matched case-control logistic regression models were estimated. The findings indicated that real-time traffic variables can be used to predict visibility related crashes on freeways. The results showed that about 69% of reduced visibility related crashes were correctly identified. The results also indicated that traffic flow variables leading to visibility related crashes are slightly different from those variables leading to clear visibility crashes. Using time slices 5-15 minutes before crashes might provide an opportunity for the appropriate traffic management centers for a proactive intervention to reduce crash risk in real-time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Naorem, Santa S.; Han, Jin; Wang, Shufang; Lee, William R.; Heng, Xiao; Miller, Jeff F.
2017-01-01
Diversity-generating retroelements (DGRs) are molecular evolution machines that facilitate microbial adaptation to environmental changes. Hypervariation occurs via a mutagenic retrotransposition process from a template repeat (TR) to a variable repeat (VR) that results in adenine-to-random nucleotide conversions. Here we show that reverse transcription of the Bordetella phage DGR is primed by an adenine residue in TR RNA and is dependent on the DGR-encoded reverse transcriptase (bRT) and accessory variability determinant (Avd), but is VR-independent. We also find that the catalytic center of bRT plays an essential role in site-specific cleavage of TR RNA for cDNA priming. Adenine-specific mutagenesis occurs during reverse transcription and does not involve dUTP incorporation, indicating it results from bRT-catalyzed misincorporation of standard deoxyribonucleotides. In vivo assays show that this hybrid RNA-cDNA molecule is required for mutagenic transposition, revealing a unique mechanism of DNA hypervariation for microbial adaptation. PMID:29109248
NASA Astrophysics Data System (ADS)
Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.
2018-06-01
The seismic hazard assessment in the area of Greece is attempted by studying the earthquake network structure, such as small-world and random. In this network, a node represents a seismic zone in the study area and a connection between two nodes is given by the correlation of the seismic activity of two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different length and number of variables show that for the construction of randomized networks the method randomizing the time series performs better than methods randomizing directly the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.
Human, Theresa; Diringer, Michael N; Allen, Michelle; Zipfel, Gregory J; Chicoine, Michael; Dacey, Ralph; Dhar, Rajat
2018-04-01
Seizures occur in 10-20% of patients with subarachnoid hemorrhage (SAH), predominantly in the acute phase. However, anticonvulsant prophylaxis remains controversial, with studies suggesting a brief course may be adequate and longer exposure may be associated with worse outcomes. Nonetheless, in the absence of controlled trials to inform practice, patients continue to receive variable chemoprophylaxis. The objective of this study was to compare brief versus extended seizure prophylaxis after aneurysmal SAH. We performed a prospective, single-center, randomized, open-label trial of a brief (3-day) course of levetiracetam (LEV) versus extended treatment (until hospital discharge). The primary outcome was in-hospital seizure. Secondary outcomes included drug discontinuation and functional outcome. Eighty-four SAH patients had been randomized when the trial was terminated due to slow enrollment. In-hospital seizures occurred in three (9%) of 35 in the brief LEV group versus one (2%) of 49 in the extended group (p = 0.2). Ten (20%) of the extended group discontinued LEV prematurely, primarily due to sedation. Four of five seizures (including one pre-randomization) occurred in patients with early brain injury (EBI) on computed tomography (CT) scans (adjusted OR 12.5, 95% CI 1.2-122, p = 0.03). Good functional outcome (mRS 0-2) was more likely in the brief LEV group (83 vs. 61%, p = 0.04). This study was underpowered to demonstrate superiority of extended LEV for seizure prophylaxis, although a trend to benefit was seen. Seizures primarily occurred in those with radiographic EBI, suggesting targeted prophylaxis may be preferable. Larger trials are required to evaluate optimal chemoprophylaxis in SAH, especially in light of worse outcomes in those receiving extended treatment.
Modelling and control algorithms of the cross conveyors line with multiengine variable speed drives
NASA Astrophysics Data System (ADS)
Cheremushkina, M. S.; Baburin, S. V.
2017-02-01
The paper deals with the actual problem of developing a control algorithm that meets the technical requirements of mine belt conveyors and enables energy and resource savings while taking into account the random nature of traffic. The most effective method of solving these tasks is the construction of control systems with the use of variable speed drives for asynchronous motors. The authors designed a mathematical model of the system 'variable speed multiengine drive - conveyor - control system of conveyors' that takes into account the dynamic processes occurring in the elements of the transport system and provides an assessment of the energy efficiency of applying the developed algorithms, which allows one to reduce the dynamic overload in the belt to 15-20%.
Gu, Wen; Reddy, Hima B; Green, Debbie; Belfi, Brian; Einzig, Shanah
2017-01-01
Criminal forensic evaluations are complicated by the risk that examinees will respond in an unreliable manner. Unreliable responding could occur due to lack of personal investment in the evaluation, severe mental illness, and low cognitive abilities. In this study, 31% of Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) profiles were invalid due to random or fixed-responding (T score ≥ 80 on the VRIN-r or TRIN-r scales) in a sample of pretrial criminal defendants evaluated in the context of treatment for competency restoration. Hierarchical regression models showed that symptom exaggeration variables, as measured by inconsistently reported psychiatric symptoms, contributed over and above education and intellectual functioning in their prediction of both random responding and fixed responding. Psychopathology variables, as measured by mood disturbance, better predicted fixed responding after controlling for estimates of cognitive abilities, but did not improve the prediction for random responding. These findings suggest that random responding and fixed responding are not only affected by education and intellectual functioning, but also by intentional exaggeration and aspects of psychopathology. Measures of intellectual functioning and effort and response style should be considered for administration in conjunction with self-report personality measures to rule out rival hypotheses of invalid profiles.
Resolving the Conflict Between Associative Overdominance and Background Selection
Zhao, Lei; Charlesworth, Brian
2016-01-01
In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952
Non-equilibrium Green's functions study of discrete dopants variability on an ultra-scaled FinFET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valin, R., E-mail: r.valinferreiro@swansea.ac.uk; Martinez, A., E-mail: a.e.Martinez@swansea.ac.uk; Barker, J. R., E-mail: john.barker@glasgow.ac.uk
In this paper, we study the effect of random discrete dopants on the performance of a 6.6 nm channel length silicon FinFET. The discrete dopants have been distributed randomly in the source/drain region of the device. Due to the small dimensions of the FinFET, a quantum transport formalism based on the non-equilibrium Green's functions has been deployed. The transfer characteristics for several devices that differ in location and number of dopants have been calculated. Our results demonstrate that discrete dopants modify the effective channel length and the height of the source/drain barrier, consequently changing the channel control of the charge. This effect becomes more significant at high drain bias. As a consequence, there is a strong effect on the variability of the on-current, off-current, sub-threshold slope, and threshold voltage. Finally, we have also calculated the mean and standard deviation of these parameters to quantify their variability. The obtained results show that the variability at high drain bias is 1.75 times larger than at low drain bias. However, the variability of the on-current, off-current, and sub-threshold slope remains independent of the drain bias. In addition, we have found that a large source-to-drain tunnelling current occurs at low gate bias.
Non-linear resonant coupling of tsunami edge waves using stochastic earthquake source models
Geist, Eric L.
2016-01-01
Non-linear resonant coupling of edge waves can occur with tsunamis generated by large-magnitude subduction zone earthquakes. Earthquake rupture zones that straddle beneath the coastline of continental margins are particularly efficient at generating tsunami edge waves. Using a stochastic model for earthquake slip, it is shown that a wide range of edge-wave modes and wavenumbers can be excited, depending on the variability of slip. If two modes are present that satisfy resonance conditions, then a third mode can gradually increase in amplitude over time, even if the earthquake did not originally excite that edge-wave mode. These three edge waves form a resonant triad that can cause unexpected variations in tsunami amplitude long after the first arrival. An M ∼ 9, 1100 km-long continental subduction zone earthquake is considered as a test case. For the least-variable slip examined involving a Gaussian random variable, the dominant resonant triad includes a high-amplitude fundamental mode wave with wavenumber associated with the along-strike dimension of rupture. The two other waves that make up this triad include subharmonic waves, one of fundamental mode and the other of mode 2 or 3. For the most variable slip examined involving a Cauchy-distributed random variable, the dominant triads involve higher wavenumbers and modes because subevents, rather than the overall rupture dimension, control the excitation of edge waves. Calculation of the resonant period for energy transfer determines which cases resonant coupling may be instrumentally observed. For low-mode triads, the maximum transfer of energy occurs approximately 20–30 wave periods after the first arrival and thus may be observed prior to the tsunami coda being completely attenuated. Therefore, under certain circumstances the necessary ingredients for resonant coupling of tsunami edge waves exist, indicating that resonant triads may be observable and implicated in late, large-amplitude tsunami arrivals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonysamy, A.A., E-mail: alphons.antonysamy@GKNAerospace.com; Meyer, J., E-mail: jonathan.meyer@eads.com; Prangnell, P.B., E-mail: philip.prangnell@manchester.ac.uk
With titanium alloys, the solidification conditions in Additive Manufacturing (AM) frequently lead to coarse columnar β-grain structures. The effect of geometry on the variability in the grain structure and texture, seen in Ti-6Al-4V alloy components produced by Selective Electron Beam Melting (SEBM), has been investigated. Reconstruction of the primary β-phase, from α-phase EBSD data, has confirmed that in bulk sections where in-fill "hatching" is employed, growth selection favours columnar grains aligned with an <001>β direction normal to the deposited powder layers; this results in a coarse β-grain structure with a strong <001>β fibre texture (up to 8 × random) that can oscillate between a near random distribution around the fibre axis and cube reinforcement with build height. It is proposed that this behaviour is related to the highly elongated melt pool and the raster directions alternating between two orthogonal directions every layer, which on average favours grains with cube alignment. In contrast, the outline, or "contour", pass produces a distinctly different grain structure and texture resulting in a skin layer on wall surfaces, where nucleation occurs off the surrounding powder and growth follows the curved surface of the melt pool. This structure becomes increasingly important in thin sections. Local heterogeneities have also been found within different section transitions, resulting from the growth of skin grain structures into thicker sections. Texture simulations have shown that the far weaker α-texture (∼3 × random), seen in the final product, arises from transformation on cooling occurring with a near random distribution of α-plates across the 12 variants possible from the Burgers relationship. - Highlights: • Distinctly different skin and bulk structures are produced by the contour and hatching passes. • Bulk sections contain coarse β-grains with a <001> fibre texture in the build direction. • This oscillates between a random distribution around the axis and cube reinforcement. • In the skin layer nucleation occurs off the surrounding powder bed and growth occurs inwards. • Simulations show that a weak α-texture results from a random distribution across habit variants.
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
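A hedged sketch of the random-matrix comparison described above: the eigenvalue spectrum of an empirical correlation matrix is compared with the Marchenko-Pastur bounds expected for uncorrelated price changes, and the leading principal component is inspected. Synthetic data with a single common factor stand in for the asset price changes.

```python
import numpy as np

rng = np.random.default_rng(8)

n_assets, n_obs = 50, 1_000
# Synthetic "returns": a common market factor plus idiosyncratic noise (illustrative).
market = rng.normal(size=(1, n_obs))
returns = 0.5 * market + rng.normal(size=(n_assets, n_obs))

corr = np.corrcoef(returns)                 # asset-by-asset correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)     # ascending eigenvalues

# Marchenko-Pastur bounds for a purely random correlation matrix.
q = n_obs / n_assets
lam_max = (1 + 1 / np.sqrt(q)) ** 2
print("largest eigenvalue:", eigvals[-1].round(2), "random upper bound:", round(lam_max, 2))
print("variance explained by top component:", (eigvals[-1] / eigvals.sum()).round(2))
```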
ERIC Educational Resources Information Center
Kong, Nan
2007-01-01
In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
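The abstract above is truncated, but a dependence measure built from (conditional) entropy can be illustrated for discrete variables. In the sketch below, the normalization (H(Y) − H(Y|X)) / H(Y) is an assumption for illustration only and is not necessarily Kong's K-dependence coefficient.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def dependence(x, y):
    """Normalized dependence of y on x: (H(Y) - H(Y|X)) / H(Y)."""
    x, y = np.asarray(x), np.asarray(y)
    h_y = entropy(y)
    h_y_given_x = 0.0
    for value in np.unique(x):
        mask = x == value
        h_y_given_x += mask.mean() * entropy(y[mask])   # weighted conditional entropy
    return (h_y - h_y_given_x) / h_y

x = np.random.default_rng(9).integers(0, 3, size=10_000)
y_dependent = (x + np.random.default_rng(10).integers(0, 2, size=10_000)) % 3
y_independent = np.random.default_rng(11).integers(0, 3, size=10_000)
print(dependence(x, y_dependent), dependence(x, y_independent))
```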
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
Transport behaviors of locally fractional coupled Brownian motors with fluctuating interactions
NASA Astrophysics Data System (ADS)
Wang, Huiqi; Ni, Feixiang; Lin, Lifeng; Lv, Wangyong; Zhu, Hongqiang
2018-09-01
In some complex viscoelastic media, it is ubiquitous that absorbing and desorbing of surrounding Brownian particles randomly occurs in coupled systems. The conventional method is to model a variable-mass system driven by both multiplicative and additive noises. In this paper, an improved mathematical model is created based on generalized Langevin equations (GLE) to characterize the random interaction with a locally fluctuating number of coupled particles in elastically coupled fractional Brownian motors (FBM). By numerical simulations, the effect of fluctuating interactions on collective transport behaviors is investigated, and some abnormal phenomena, such as cooperative behaviors, stochastic resonance (SR) and anomalous transport, are observed in the regime of sub-diffusion.
Implicit learning of non-spatial sequences in schizophrenia
MARVEL, CHERIE L.; SCHWARTZ, BARBARA L.; HOWARD, DARLENE V.; HOWARD, JAMES H.
2006-01-01
Recent studies have reported abnormal implicit learning of sequential patterns in patients with schizophrenia. Because these studies were based on visuospatial cues, the question remained whether patients were impaired simply due to the demands of spatial processing. This study examined implicit sequence learning in 24 patients with schizophrenia and 24 healthy controls using a non-spatial variation of the serial reaction time test (SRT) in which pattern stimuli alternated with random stimuli on every other trial. Both groups showed learning by responding faster and more accurately to pattern trials than to random trials. Patients, however, showed a smaller magnitude of sequence learning. Both groups were unable to demonstrate explicit knowledge of the nature of the pattern, confirming that learning occurred without awareness. Clinical variables were not correlated with the patients' learning deficits. Patients with schizophrenia have a decreased ability to develop sensitivity to regularly occurring sequences of events within their environment. This type of deficit may affect an array of cognitive and motor functions that rely on the perception of event regularity. PMID:16248901
Novel approaches to pin cluster synchronization on complex dynamical networks in Lur'e forms
NASA Astrophysics Data System (ADS)
Tang, Ze; Park, Ju H.; Feng, Jianwen
2018-04-01
This paper investigates the cluster synchronization of complex dynamical networks consisting of identical or nonidentical Lur'e systems. Due to the special topology structure of the complex networks and the existence of stochastic perturbations, a kind of randomly occurring pinning controller is designed which not only synchronizes all Lur'e systems in the same cluster but also decreases the negative influence among different clusters. Firstly, based on an extended integral inequality, the convex combination theorem and the S-procedure, conditions for cluster synchronization of identical Lur'e networks are derived in a convex domain. Secondly, randomly occurring adaptive pinning controllers with two independent Bernoulli stochastic variables are designed, and sufficient conditions are then obtained for cluster synchronization of complex networks consisting of nonidentical Lur'e systems. In addition, suitable control gains for successful cluster synchronization of nonidentical Lur'e networks are acquired by designing some adaptive updating laws. Finally, we present two numerical examples to demonstrate the validity of the control scheme and the theoretical analysis.
Contextuality in canonical systems of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.
2017-10-01
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
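One building block of this framework, the maximal coupling of a content-sharing pair of binary random variables, can be written down explicitly; the toy sketch below (marginals chosen arbitrarily for illustration, not taken from the article) constructs that joint distribution and its agreement probability.

```python
import numpy as np

def maximal_coupling(p, q):
    """Joint pmf of two binary variables with P(A=1)=p, P(B=1)=q
    that maximizes P(A=B).  Returns (2x2 joint array, P(A=B))."""
    p11 = min(p, q)              # put as much mass as possible on agreement
    p00 = min(1 - p, 1 - q)
    p10 = p - p11                # remaining mass is forced onto disagreement
    p01 = q - p11
    joint = np.array([[p00, p01],
                      [p10, p11]])
    return joint, p00 + p11

joint, agree = maximal_coupling(0.7, 0.4)
print(joint)
print(f"maximal P(A = B) = {agree:.2f}   (equals 1 - |p - q| = {1 - abs(0.7 - 0.4):.2f})")
```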
NASA Astrophysics Data System (ADS)
Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, YH
2018-05-01
In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented by a networked control system (NCS), where only the off-line data is transmitted over the network while the real-time data is delivered in a point-to-point way. Thus, there are two controllers rather than one in the control system, which makes better use of the saved and current information and thereby improves the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropout occurs randomly, and the dropout rate is modeled as a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, the simulation results verify the effectiveness of the proposed method.
Hierarchical Hopping through Localized States in a Random Potential
NASA Astrophysics Data System (ADS)
Rajan, Harihar; Srivastava, Vipin
2003-03-01
A generalisation of Mott's idea of (low-temperature, large-time) variable-range hopping is considered to include hopping at somewhat higher temperatures (that do not kill localization). These transitions complement variable-range hopping in that they do not conserve energy and occur on relatively shorter time scales. The hopper picks the next state in a hierarchical fashion in accordance with certain conditions. The results are found to tie up nicely with an interesting property pertaining to the energy dependence of localized states. Acknowledgements: One of us (VS) would like to thank the Association of Commonwealth Universities and the Leverhulme Trust for financial help, and Sir Sam Edwards for hospitality at the Cavendish Laboratory, Cambridge CB3 0HE.
Barakat, Lamia P.; Schwartz, Lisa A.; Salamon, Katherine S.; Radcliffe, Jerilynn
2010-01-01
The study had two aims--to determine the efficacy of a family-based cognitive-behavioral pain management intervention for adolescents with sickle cell disease (SCD) in (1) reducing pain and improving health-related variables and (2) improving psychosocial outcomes. Each adolescent and a family support person were randomly assigned to receive a brief pain intervention (PAIN) (n = 27) or a disease education attention control intervention (DISEASE ED) (n = 26) delivered at home. Assessment of primary pain and health-related variables (health service use, pain coping, pain-related hindrance of goals) and secondary psychosocial outcomes (disease knowledge, disease self-efficacy, and family communication) occurred at baseline (prior to randomization), post-intervention, and one-year follow-up. Change on outcomes did not differ significantly by group at either time point. When groups were combined in exploratory analyses, there was evidence of small to medium effects of intervention on health-related and psychosocial variables. Efforts to address barriers to participation and improve feasibility of psychosocial interventions for pediatric SCD are critical to advancing development of effective treatments for pain. Sample size was insufficient to adequately test efficacy, and analyses did not support this focused cognitive-behavioral pain management intervention in this sample of adolescents with SCD. Exploratory analyses suggest that comprehensive interventions, that address a broad range of skills related to disease management and adolescent health concerns, may be more effective in supporting teens during healthcare transition. PMID:20686425
A review of selection-based tests of abiotic surrogates for species representation.
Beier, Paul; Sutcliffe, Patricia; Hjort, Jan; Faith, Daniel P; Pressey, Robert L; Albuquerque, Fabio
2015-06-01
Because conservation planners typically lack data on where species occur, environmental surrogates--including geophysical settings and climate types--have been used to prioritize sites within a planning area. We reviewed 622 evaluations of the effectiveness of abiotic surrogates in representing species in 19 study areas. Sites selected using abiotic surrogates represented more species than an equal number of randomly selected sites in 43% of tests (55% for plants) and on average improved on random selection of sites by about 8% (21% for plants). Environmental diversity (ED) (42% median improvement on random selection) and biotically informed clusters showed promising results and merit additional testing. We suggest 4 ways to improve performance of abiotic surrogates. First, analysts should consider a broad spectrum of candidate variables to define surrogates, including rarely used variables related to geographic separation, distance from coast, hydrology, and within-site abiotic diversity. Second, abiotic surrogates should be defined at fine thematic resolution. Third, sites (the landscape units prioritized within a planning area) should be small enough to ensure that surrogates reflect species' environments and to produce prioritizations that match the spatial resolution of conservation decisions. Fourth, if species inventories are available for some planning units, planners should define surrogates based on the abiotic variables that most influence species turnover in the planning area. Although species inventories increase the cost of using abiotic surrogates, a modest number of inventories could provide the data needed to select variables and evaluate surrogates. Additional tests of nonclimate abiotic surrogates are needed to evaluate the utility of conserving nature's stage as a strategy for conservation planning in the face of climate change. © 2015 Society for Conservation Biology.
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
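As an illustrative companion to these closed-form results (not a reproduction of them), the following sketch numerically maximizes the entropy of an integer-valued distribution on a truncated support subject to a fixed pth absolute moment; the support, the value of p and the moment constraint are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

p_norm = 1.0                       # the "p" of the L_p constraint (assumed)
support = np.arange(-20, 21)       # truncated integer support (assumed)
target_moment = 3.0                # required E|X|^p (assumed)

def neg_entropy(q):
    q = np.clip(q, 1e-12, None)
    return np.sum(q * np.log(q))

constraints = (
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},
    {"type": "eq", "fun": lambda q: np.sum(q * np.abs(support) ** p_norm) - target_moment},
)
q0 = np.full(support.size, 1.0 / support.size)
res = minimize(neg_entropy, q0, bounds=[(0.0, 1.0)] * support.size,
               constraints=constraints, method="SLSQP")

q = res.x
print(f"maximum entropy: {-neg_entropy(q):.4f} nats")
# The optimizer should recover the familiar exponential form q_i proportional to exp(-lam * |i|^p)
```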
Stochastic effects in EUV lithography: random, local CD variability, and printing failures
NASA Astrophysics Data System (ADS)
De Bisschop, Peter
2017-10-01
Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. Next to the CD-variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects to, e.g., CD, pitch, resist, exposure dose, etc. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.
Does Mother Know Best? Treatment Adherence as a Function of Anticipated Treatment Benefit
Glymour, M. Maria; Nguyen, Quynh; Matsouaka, Roland; Tchetgen Tchetgen, Eric J.; Schmidt, Nicole M.; Osypuk, Theresa L.
2016-01-01
Background We describe bias resulting from individualized treatment selection, which occurs when treatment has heterogeneous effects and individuals selectively choose treatments of greatest benefit to themselves. This pernicious bias may confound estimates from observational studies and lead to important misinterpretation of intent-to-treat analyses of randomized trials. Despite the potentially serious threat to inferences, individualized treatment selection has rarely been formally described or assessed. Methods The Moving to Opportunity (MTO) trial randomly assigned subsidized rental vouchers to low-income families in high-poverty public housing. We assessed the Kessler-6 psychological distress and Behavior Problems Index outcomes for 2,829 adolescents 4–7 years after randomization. Among families randomly assigned to receive vouchers, we estimated the probability of moving (treatment), predicted by pre-randomization characteristics (c-statistic=0.63). We categorized families into tertiles of this estimated probability of moving, and compared instrumental variable effect estimates for moving on the Behavior Problems Index and Kessler-6 across tertiles. Results Instrumental variable estimated effects of moving on the Behavior Problems Index were most adverse for boys least likely to move (b=0.93; 95% CI: 0.33, 1.53) compared to boys most likely to move (b=0.14; 95% CI: −0.15, 0.44; p=.02 for treatment*tertile interaction). Effects on Kessler-6 were more beneficial for girls least likely to move compared to girls most likely to move (−0.62 vs. 0.02; interaction p=.03). Conclusions Evidence of individualized treatment selection differed by child gender and outcome and should be evaluated in randomized trial reports, especially when heterogeneous treatment effects are likely and non-adherence is common. PMID:26628424
On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X_(1)), E(Z_(1)), and E(Y_(1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_(1))/E(Y_(1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimums.
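A quick Monte Carlo sketch of the comparison (n, the success probability and the replication count are arbitrary illustrative choices): geometric and exponential variables are given matching means, and the expected minima and the expected number of ties at the minimum are estimated by simulation, so the ratio E(X_(1))/E(Y_(1)) can be compared with the tie count as stated above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 5, 0.3, 200_000          # number of variables, success probability, replications
lam = p                                # exponential rate chosen so means match (1/p = 1/lam)

geo = rng.geometric(p, size=(reps, n))            # support 1, 2, 3, ...
exp_ = rng.exponential(scale=1.0 / lam, size=(reps, n))

min_geo = geo.min(axis=1)
min_exp = exp_.min(axis=1)
ties = (geo == min_geo[:, None]).sum(axis=1)      # ties at the minimum (geometric case)

print(f"E[min of geometrics]   ~ {min_geo.mean():.3f}")
print(f"E[min of exponentials] ~ {min_exp.mean():.3f}")
# The abstract's identity: the ratio of the two minima equals the expected number of ties
print(f"ratio ~ {min_geo.mean() / min_exp.mean():.3f},  E[#ties at min] ~ {ties.mean():.3f}")
```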
Students' Misconceptions about Random Variables
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
Approximation algorithms for the min-power symmetric connectivity problem
NASA Astrophysics Data System (ADS)
Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad
2016-10-01
We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem occurs in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We have performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform the previously known heuristics.
Does the central limit theorem always apply to phase noise? Some implications for radar problems
NASA Astrophysics Data System (ADS)
Gray, John E.; Addison, Stephen R.
2017-05-01
The phase noise problem or Rayleigh problem occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the summation of identically distributed random variables is the product of multiple characteristic functions (CF). The product of the CFs for phase noise has a CF that can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof as discussed in Papoulis to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
Doidge, James C
2018-02-01
Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
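A schematic illustration of the weighting idea (the data-generating model, variable names and the use of scikit-learn are assumptions for illustration, not the paper's procedure): the response indicator is modeled from a responsiveness measure, and complete cases are re-weighted by the inverse of their estimated response probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5_000
responsiveness = rng.uniform(0, 1, n)                   # e.g. share of earlier waves completed
y = 2.0 + 1.5 * responsiveness + rng.normal(size=n)     # outcome of interest

# Missingness depends on responsiveness -> data are not missing completely at random
p_respond = 1 / (1 + np.exp(-(-1.0 + 3.0 * responsiveness)))
observed = rng.uniform(size=n) < p_respond

# Estimate response probabilities from the auxiliary variable and form IPW weights
model = LogisticRegression().fit(responsiveness[:, None], observed.astype(int))
p_hat = model.predict_proba(responsiveness[:, None])[:, 1]
weights = 1.0 / p_hat[observed]

naive = y[observed].mean()
ipw = np.average(y[observed], weights=weights)
print(f"true mean {y.mean():.3f}, complete-case {naive:.3f}, IPW {ipw:.3f}")
```

The complete-case mean is biased upward because responsive participants have higher outcomes in this toy model; the inverse-probability-weighted mean moves back towards the full-sample value.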
Fatigue Tests with Random Flight Simulation Loading
NASA Technical Reports Server (NTRS)
Schijve, J.
1972-01-01
Crack propagation was studied in a full-scale wing structure under different simulated flight conditions. Omission of low-amplitude gust cycles had a small effect on the crack rate. Truncation of the infrequently occurring high-amplitude gust cycles to a lower level had a noticeably accelerating effect on crack growth. The application of fail-safe load (100 percent limit load) effectively stopped subsequent crack growth under resumed flight-simulation loading. In another flight-simulation test series on sheet specimens, the variables studied are the design stress level and the cyclic frequency of the random gust loading. Inflight mean stresses vary from 5.5 to 10.0 kg/sq mm. The effect of the stress level is larger for the 2024 alloy than for the 7075 alloy. Three frequencies were employed: namely, 10 cps, 1 cps, and 0.1 cps. The frequency effect was small. The advantages and limitations of flight-simulation tests are compared with those of alternative test procedures such as constant-amplitude tests, program tests, and random-load tests. Various testing purposes are considered. The variables of flight-simulation tests are listed and their effects are discussed. A proposal is made for performing systematic flight-simulation tests in such a way that the compiled data may be used as a source of reference.
Merli, Mauro; Moscatelli, Marco; Mariotti, Giorgia; Piemontese, Matteo; Nieri, Michele
2012-02-01
To compare immediate versus early non-occlusal loading of dental implants placed flapless in a 3-year, parallel group, randomized clinical trial. The study was conducted in a private dental clinic between July 2005 and July 2010. Patients 18 years or older were randomized to receive implants for fixed partial dentures in cases of partial edentulism. The test group was represented by immediate non-occlusal implant loading, whereas the control group was represented by early non-occlusal implant loading. The outcome variables were implant failure, complications and radiographic bone level at implant sites 3 years after loading, measured from the implant-abutment junction to the most coronal point of bone-to-implant contact. Randomization was computer-generated with allocation concealment by opaque sequentially numbered sealed envelopes, and the measurer was blinded to group assignment. Sixty patients were randomized: 30 to the immediately loaded group and 30 to the early loaded group. Four patients dropped out; however, the data of all patients were included in the analysis. No implant failure occurred. Two complications occurred in the control group and one in the test group. The mean bone level at 3 years was 1.91 mm for test group and 1.59 mm for control group. The adjusted difference in bone level was 0.26 mm (CI 95% -0.08 to 0.59, p = 0.1232). The null hypothesis of no difference in failure rates, complications and bone level between implants that were loaded immediately or early at 3 years cannot be rejected in this randomized clinical trial. © 2011 John Wiley & Sons A/S.
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
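A compact sketch of that procedure on simulated scores (the number of variables, observations and permutations are arbitrary choices, not the study's data): k-1 of the variables are independently re-ordered to build the null distribution of the largest eigenvalue of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, k = 40, 5
scores = rng.normal(size=(n_obs, k))
scores += 0.6 * rng.normal(size=(n_obs, 1))        # shared component -> genuine association

def largest_eig(data):
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)).max()

observed = largest_eig(scores)

n_perm = 2000
null = np.empty(n_perm)
for b in range(n_perm):
    shuffled = scores.copy()
    for j in range(1, k):                          # randomly re-order k-1 of the variables
        shuffled[:, j] = rng.permutation(shuffled[:, j])
    null[b] = largest_eig(shuffled)

p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"largest eigenvalue {observed:.2f}, randomization p-value {p_value:.4f}")
```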
Accounting for range uncertainties in the optimization of intensity modulated proton therapy.
Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas
2007-05-21
Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function a random variable too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches--although conceptually different--yield very similar results on a qualitative level.
L-ornithine-L-aspartate infusion efficacy in hepatic encephalopathy.
Ahmad, Irfan; Khan, Anwaar A; Alam, Altaf; Dilshad, Akif; Butt, Arshad Kamal; Shafqat, Farzana; Malik, Kashif; Sarwar, Shahid
2008-11-01
To determine the efficacy of L-ornithine-L-aspartate in treatment of hepatic encephalopathy. Randomized, placebo-controlled trial. Department of Gastroenterology and Hepatology, Sheikh Zayed Hospital, Lahore, from February to August 2005. Cirrhotic patients with hyperammonemia and overt hepatic encephalopathy were enrolled. Eighty patients were randomized to two treatment groups, L-ornithine-L-aspartate (20 g/d) or placebo, both dissolved in 250 mL of 5% dextrose water and infused intravenously for four hours a day for five consecutive days with 0.5 g/kg dietary protein intake at the end of daily treatment period. Outcome variables were postprandial blood ammonia and mental state grade. Adverse reactions and mortality were also determined. Both treatment groups were comparable regarding age, gender, etiology of cirrhosis, Child-Pugh class, mental state grade and blood ammonia at baseline. Although, improvement occurred in both groups, there was a greater improvement in L-ornithine-L-aspartate group with regard to both variables. Four patients in the placebo group and 2 in L-ornithine-L-aspartate group died. L-ornithine-L-aspartate infusions were found to be effective in cirrhotic patients with hepatic encephalopathy.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
Analysis on flood generation processes by means of a continuous simulation model
NASA Astrophysics Data System (ADS)
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.
2006-03-01
In the present research, we exploited a continuous hydrological simulation to investigate the key variables responsible for flood peak formation. With this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP-Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls of flood generation mechanisms. The investigated variables are those used in theoretically derived probability distributions of floods based on the concept of partial contributing area (e.g. Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak, and its lag-time. Results suggest interesting simplifications for the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.
Patterned feeding experience for preterm infants: study protocol for a randomized controlled trial.
Pickler, Rita H; Wetzel, Paul A; Meinzen-Derr, Jareen; Tubbs-Cooley, Heather L; Moore, Margo
2015-06-04
Neurobehavioral disabilities occur in 5-15% of preterm infants with an estimated 50-70% of very low birth weight preterm infants experiencing later dysfunction, including cognitive, behavioral, and social delays that often persist into adulthood. Factors implicated in poor neurobehavioral and developmental outcomes are hospitalization in the neonatal intensive care unit (NICU) and inconsistent caregiving patterns. Although much underlying brain damage occurs in utero or shortly after birth, neuroprotective strategies can stop lesions from progressing, particularly when these strategies are used during the most sensitive periods of neural plasticity occurring months before term age. The purpose of this randomized trial is to test the effect of a patterned feeding experience on preterm infants' neurobehavioral organization and development, cognitive function, and clinical outcomes. This trial uses an experimental, longitudinal, 2-group design with 120 preterm infants. Infants are enrolled within the first week of life and randomized to an experimental group receiving a patterned feeding experience from the first gavage feeding through discharge or to a control group receiving usual feeding care experience. The intervention involves a continuity of tactile experiences associated with feeding to train and build neuronal networks supportive of normal infant feeding experience. Primary outcomes are neurobehavioral organization as measured by Neurobehavioral Assessment of the Preterm Infant at 3 time points: the transition to oral feedings, NICU discharge, and 2 months corrected age. Secondary aims are cognitive function measured using the Bayley Scales of Infant and Toddler Development, Third Edition at 6 months corrected age, neurobehavioral development (sucking organization, feeding performance, and heart rate variability), and clinical outcomes (length of NICU stay and time to full oral feeding). The potential effects of demographic and biobehavioral factors (perinatal events and conditions of maternal or fetal/newborn origin and immunologic and genetic biomarkers) on the outcome variables will also be considered. Theoretically, the intervention provided at a critical time in neurologic system development and associated with a recurring event (feeding) should enhance neural connections that may be important for later development, particularly language and other cognitive and neurobehavioral organization skills. NCT01577615 11 April 2012.
Effect of vibration duration on human discomfort. [passenger comfort and random vibration
NASA Technical Reports Server (NTRS)
Clevenson, S. A.; Dempsey, T. K.; Leatherwood, J. D.
1978-01-01
The duration effects of random vertical vibration on passenger discomfort were studied in a simulated section of an aircraft cabin configured to seat six persons in tourist-class style. Variables of the study included time of exposure (0.25 min to 60 min) and the rms amplitude of vibration (0.025g to 0.100g). The vibrations had a white noise spectrum with a bandwidth of 10 Hz centered at 5 Hz. Data indicate that the discomfort threshold occurred at an rms vertical acceleration level of 0.027g for all durations of vibration. However, for acceleration levels that exceeded the discomfort threshold, a systematic decrease in discomfort occurred as a function of increasing duration of vibration. For the range of accelerations used, the magnitude of the discomfort decrement was shown to be independent of acceleration level. The results suggest that discomfort from vertical vibration applied in the frequency range at which humans are most sensitive decreases with longer exposure, which is the opposite of the recommendation of the International Standard ISO 2631-1974 (E) Guide for the Evaluation of Human Exposure to Whole-Body Vibration.
Witkowski, Maria Carolina; de Moraes, Maria Antonieta P.; Firpo, Cora Maria F.
2013-01-01
OBJECTIVE: To compare two systems of arterial catheter maintenance in postoperative pediatric surgery using intermittent or continuous infusion of heparin solution, and to analyze adverse events related to the site of catheter insertion and the volume of infused heparin solution. METHODS: Randomized controlled trial with 140 patients assigned to a continuous infusion group (CIG) and an intermittent infusion group (IIG). The variables analyzed were: type of heart disease, permanence time and size of the catheter, insertion site, technique used, volume of heparin solution and adverse events. The descriptive variables were analyzed by Student's t-test and the categorical variables by the chi-square test, with p<0.05 considered significant. RESULTS: The median age was 11 (0-22) months, and 77 (55%) were females. No significant differences between the studied variables were found, except for the volume used in the CIG (12.0±1.2mL/24 hours) when compared to the IIG (5.3±3.5mL/24 hours) with p<0.0003. CONCLUSIONS: The continuous infusion system and the intermittent infusion of heparin solution can be used for intra-arterial catheter maintenance in postoperative pediatric surgery, regardless of the patient's clinical and demographic characteristics. Adverse events up to the third postoperative day occurred similarly in both groups. However, the use of the intermittent infusion system in underweight children should be considered, due to the lower volume of infused heparin solution [ClinicalTrials.gov Identifier: NCT01097031]. PMID:24473958
Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S
2017-01-01
Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.
An instrumental variable random-coefficients model for binary outcomes
Chesher, Andrew; Rosen, Adam M
2014-01-01
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048
Polynomial chaos expansion with random and fuzzy variables
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.
2016-06-01
A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
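To make the Legendre-chaos machinery concrete for the purely random case (the response function, truncation order and quadrature size below are illustrative assumptions, and the fuzzy-variable treatment is not shown): the PCE coefficients are obtained by Gauss-Legendre projection and then post-processed into the mean and variance of the response.

```python
import numpy as np
from numpy.polynomial import legendre

g = lambda x: np.exp(0.5 * x) + x**2      # response of interest (assumed), x ~ Uniform(-1, 1)
order = 6                                  # PCE truncation order (assumed)

# Gauss-Legendre projection: c_k = (2k+1)/2 * integral of g(x) P_k(x) dx over [-1, 1]
nodes, weights = legendre.leggauss(30)
coeffs = np.array([
    (2 * k + 1) / 2 * np.sum(weights * g(nodes) * legendre.Legendre.basis(k)(nodes))
    for k in range(order + 1)
])

# Post-processing of the PCE: with density 1/2 on [-1, 1], E[P_k^2] = 1/(2k+1)
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))

# Monte Carlo check
x = np.random.default_rng(5).uniform(-1, 1, 200_000)
print(f"PCE  mean {mean:.4f}, variance {variance:.4f}")
print(f"MC   mean {g(x).mean():.4f}, variance {g(x).var():.4f}")
```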
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-random variables if their joint probability distribution is known.
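A small sketch of the recursive construction in two dimensions, with a toy joint law chosen purely for illustration: x_1 is obtained from U_1 through the inverse marginal CDF, and x_2 from U_2 through the inverse conditional CDF given x_1.

```python
import numpy as np

rng = np.random.default_rng(6)
U1, U2 = rng.uniform(size=100_000), rng.uniform(size=100_000)

# Toy joint law (an assumption for illustration):
#   X1 ~ Exponential(rate=1),  X2 | X1 = x1 ~ Exponential(rate = 1 + x1)
f1 = lambda u1: -np.log(1.0 - u1)                        # inverse CDF of Exp(1)
f2 = lambda u1, u2: -np.log(1.0 - u2) / (1.0 + f1(u1))   # inverse conditional CDF given x1

x1 = f1(U1)
x2 = f2(U1, U2)

# Sanity check against the known moments: E[X1] = 1, E[X2] = E[1/(1+X1)]
print(f"E[X1] ~ {x1.mean():.3f} (theory 1.0)")
print(f"E[X2] ~ {x2.mean():.3f},  E[1/(1+X1)] ~ {(1 / (1 + x1)).mean():.3f}")
```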
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2017-07-01
This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters is obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
Generation of dense plume fingers in saturated-unsaturated homogeneous porous media
NASA Astrophysics Data System (ADS)
Cremer, Clemens J. M.; Graf, Thomas
2015-02-01
Flow under variable-density conditions is widespread, occurring in geothermal reservoirs, at waste disposal sites or due to saltwater intrusion. The migration of dense plumes typically results in the formation of vertical plume fingers which are known to be triggered by material heterogeneity or by variations in source concentration that causes the density variation. Using a numerical groundwater model, six perturbation methods are tested under saturated and unsaturated flow conditions to mimic heterogeneity and concentration variations on the pore scale in order to realistically generate dense fingers. A laboratory-scale sand tank experiment is numerically simulated, and the perturbation methods are evaluated by comparing plume fingers obtained from the laboratory experiment with numerically simulated fingers. Dense plume fingering for saturated flow can best be reproduced with a spatially random, time-constant perturbation of the solute source. For unsaturated flow, a spatially and temporally random noise of solute concentration or a random conductivity field adequately simulate plume fingering.
2015-01-07
Measures of residual risk view a random variable of interest in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved…
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
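For comparison, a short sketch of one standard route to the raw moments; note that it relies on the well-known identity with Stirling numbers of the second kind and falling factorials, not necessarily the first-kind recursion developed in the article.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(m, k):
    """Stirling numbers of the second kind via the usual recursion."""
    if m == k:
        return 1
    if k == 0 or k > m:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def falling(n, k):
    out = 1
    for i in range(k):
        out *= n - i
    return out

def binomial_raw_moment(m, n, p):
    """E[X^m] for X ~ Binomial(n, p) via E[X^m] = sum_k S(m,k) * n_(k) * p^k."""
    return sum(stirling2(m, k) * falling(n, k) * p**k for k in range(m + 1))

# Brute-force check against the definition of the moment
n, p, m = 10, 0.3, 4
brute = sum(x**m * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))
print(binomial_raw_moment(m, n, p), brute)
```

Central moments then follow from the raw moments by a binomial expansion about the mean np.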
Cognitive Analytic Therapy for Bipolar Disorder: A Pilot Randomized Controlled Trial.
Evans, Mark; Kellett, Stephen; Heyland, Simon; Hall, Jo; Majid, Shazmin
2017-01-01
The evidence base for treatment of bipolar affective disorder (BD) demands the evaluation of new psychotherapies in order to broaden patient choice. This study reports on the feasibility, safety, helpfulness and effectiveness of cognitive analytic therapy (CAT). In a pilot randomized controlled trial, BD patients in remission were randomized to either receiving 24 sessions of CAT (n = 9) or treatment as usual (n = 9) and were assessed in terms of symptoms, functioning and service usage over time. In the CAT arm no adverse events occurred, 8/9 completed treatment, 5/8 attended all 24 sessions and 2/8 were categorized as recovered. The most common helpful event during CAT was recognition of patterns in mood variability, with helpfulness themes changing according to phase of therapy. No major differences were found when comparing the arms over time in terms of service usage or psychometric outcomes. The study suggests that conducting further research into the effectiveness of CAT in treating BD is warranted and guidance regarding future trials is provided. Copyright © 2016 John Wiley & Sons, Ltd. Treating BD with CAT appears feasible and safe. Retaining fidelity to the reformulation, recognition and revision structure of CAT appears useful. Participants stated that across the phases of CAT, focussing on patterns of mood variability was consistently helpful. Copyright © 2016 John Wiley & Sons, Ltd.
Reducing The Risk Of Fires In Conveyor Transport
NASA Astrophysics Data System (ADS)
Cheremushkina, M. S.; Poddubniy, D. A.
2017-01-01
The paper deals with the problem of increasing the operational safety of belt conveyors in mines. A control algorithm was developed that meets the technical requirements of mine belt conveyors, reduces the risk of conveyor belt fires, and enables energy and resource savings while taking into account the random nature of traffic. The most effective method for solving such tasks is the construction of control systems that use variable speed drives for asynchronous motors. A mathematical model of the system "variable speed multiengine drive - conveyor - control system of conveyors" was designed that takes into account the dynamic processes occurring in the elements of the transport system and provides an assessment of the energy efficiency of the developed algorithms, which allows the dynamic overload in the belt to be reduced by (15-20)%.
A Random Variable Related to the Inversion Vector of a Partial Random Permutation
ERIC Educational Resources Information Center
Laghate, Kavita; Deshpande, M. N.
2005-01-01
In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
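As a concrete illustration of the object being studied (the convention and the example permutation are my own; conventions for inversion vectors differ slightly between texts): each entry counts the larger values appearing to the left of a given value, and the entries sum to the total number of inversions.

```python
import random

def inversion_vector(perm):
    """For a permutation of 1..n, entry i-1 counts the values greater than i
    that appear to the left of i (one common convention)."""
    pos = {v: idx for idx, v in enumerate(perm)}
    n = len(perm)
    return [sum(1 for w in range(v + 1, n + 1) if pos[w] < pos[v]) for v in range(1, n + 1)]

perm = list(range(1, 8))
random.shuffle(perm)                 # stand-in for a (partial) random permutation
vec = inversion_vector(perm)
print(perm, vec, "sum of inversion vector =", sum(vec))
```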
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Kennedy, R R; Merry, A F
2011-09-01
Anaesthesia involves processing large amounts of information over time. One task of the anaesthetist is to detect substantive changes in physiological variables promptly and reliably. It has been previously demonstrated that a graphical trend display of historical data leads to more rapid detection of such changes. We examined the effect of a graphical indication of the magnitude of Trigg's Tracking Variable, a simple statistically based trend detection algorithm, on the accuracy and latency of the detection of changes in a micro-simulation. Ten anaesthetists each viewed 20 simulations with four variables displayed as the current value with a simple graphical trend display. Values for these variables were generated by a computer model, and updated every second; after a period of stability a change occurred to a new random value at least 10 units from baseline. In 50% of the simulations an indication of the rate of change was given by a five level graphical representation of the value of Trigg's Tracking Variable. Participants were asked to indicate when they thought a change was occurring. Changes were detected 10.9% faster with the trend indicator present (mean 13.1 [SD 3.1] cycles vs 14.6 [SD 3.4] cycles, 95% confidence interval 0.4 to 2.5 cycles, P = 0.013. There was no difference in accuracy of detection (median with trend detection 97% [interquartile range 95 to 100%], without trend detection 100% [98 to 100%]), P = 0.8. We conclude that simple statistical trend detection may speed detection of changes during routine anaesthesia, even when a graphical trend display is present.
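For readers unfamiliar with the algorithm, here is a minimal sketch of Trigg's Tracking Variable as it is commonly defined (the smoothing constant, simulated signal and error-scale initialization are illustrative assumptions, not the study's implementation): the ratio of the exponentially smoothed error to the exponentially smoothed absolute error moves towards ±1 when a sustained change occurs.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 0.2                                         # smoothing constant (assumed)

# Simulated physiological variable: stable, then a step change at sample 60
signal = np.concatenate([rng.normal(80, 2, 60), rng.normal(95, 2, 40)])

forecast, sm_err, sm_abs_err = signal[0], 0.0, 2.0  # prior error scale (assumed)
tracking = []
for x in signal:
    err = x - forecast                               # one-step-ahead forecast error
    sm_err = alpha * err + (1 - alpha) * sm_err
    sm_abs_err = alpha * abs(err) + (1 - alpha) * sm_abs_err
    tracking.append(sm_err / sm_abs_err)             # Trigg's tracking variable, in [-1, 1]
    forecast = alpha * x + (1 - alpha) * forecast     # exponentially smoothed forecast

# Magnitude of the tracking variable before and after the change (ignoring start-up transient)
before = max(abs(tv) for tv in tracking[10:60])
after = max(abs(tv) for tv in tracking[60:])
print(f"max |TV| before change: {before:.2f}, after change: {after:.2f}")
```

Binning |TV| into a few levels, as in the study, would then drive the graphical magnitude indicator shown alongside the trend display.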
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
Lucareli, P R; Lima, M O; Lima, F P S; de Almeida, J G; Brech, G C; D'Andréa Greve, J M
2011-09-01
Single-blind randomized, controlled clinical study. To evaluate, using kinematic gait analysis, the results obtained from gait training on a treadmill with body weight support versus those obtained with conventional gait training and physiotherapy. Thirty patients with sequelae from traumatic incomplete spinal cord injuries at least 12 months earlier; patients were able to walk and were classified according to motor function as ASIA (American Spinal Injury Association) impairment scale C or D. Patients were divided randomly into two groups of 15 patients by the drawing of opaque envelopes: group A (weight support) and group B (conventional). After an initial assessment, both groups underwent 30 sessions of gait training. Sessions occurred twice a week, lasted for 30 min each and continued for four months. All of the patients were evaluated by a single blinded examiner using movement analysis to measure angular and linear kinematic gait parameters. Six patients (three from group A and three from group B) were excluded because they attended fewer than 85% of the training sessions. There were no statistically significant differences in intra-group comparisons among the spatial-temporal variables in group B. In group A, the following significant differences in the studied spatial-temporal variables were observed: increases in velocity, distance, cadence, step length, swing phase and gait cycle duration, in addition to a reduction in stance phase. There were also no significant differences in intra-group comparisons among the angular variables in group B. However, group A achieved significant improvements in maximum hip extension and plantar flexion during stance. Gait training with body weight support was more effective than conventional physiotherapy for improving the spatial-temporal and kinematic gait parameters among patients with incomplete spinal cord injuries.
NASA Astrophysics Data System (ADS)
Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.
2013-08-01
Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
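A simplified Monte Carlo sketch of this class of model (the gamma dwell-time distributions, rates and horizon are assumptions for illustration; this is not the authors' population-density method): the gene alternates between inactive and active periods with arbitrary dwell-time distributions, transcription events occur as a Poisson process while active, and each transcript is degraded after an exponential lifetime.

```python
import numpy as np

rng = np.random.default_rng(8)
k_tx, k_deg, t_end = 20.0, 1.0, 200.0        # transcription rate (ON), degradation rate, horizon

# Non-exponential dwell times (assumed): gamma-distributed inactive and active periods
draw_T0 = lambda: rng.gamma(shape=2.0, scale=1.0)    # inactive-state dwell time
draw_T1 = lambda: rng.gamma(shape=2.0, scale=0.5)    # active-state dwell time

def mrna_count_at(t_end):
    t, count = 0.0, 0
    while t < t_end:
        t += draw_T0()                               # OFF period: no transcription
        t_on = min(draw_T1(), max(t_end - t, 0.0))   # ON period, clipped at the horizon
        n_events = rng.poisson(k_tx * t_on)          # transcription events while ON
        birth_times = t + rng.uniform(0.0, t_on, n_events)
        lifetimes = rng.exponential(1.0 / k_deg, n_events)
        count += np.sum(birth_times + lifetimes > t_end)   # transcripts still alive at t_end
        t += t_on
    return count

samples = np.array([mrna_count_at(t_end) for _ in range(2000)])
print(f"steady-state mRNA copy number: mean {samples.mean():.2f}, Fano factor "
      f"{samples.var() / samples.mean():.2f}")
```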
Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape.
Kaleta, Kamil; Lőrinczi, József
2016-02-01
The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, or foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring upon a combined change in the characteristics of the process and in the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what the mechanism of the spatial decay of a ground state is.

Spatial Variability of Dissolved Organic Carbon in Headwater Wetlands in Central Pennsylvania
NASA Astrophysics Data System (ADS)
Reichert-Eberhardt, A. J.; Wardrop, D.; Boyer, E. W.
2011-12-01
Dissolved organic carbon (DOC) is known to be an important factor in many microbially mediated biochemical processes, such as denitrification, that occur in wetlands. The spatial variability of DOC within a wetland could impact the microbes that fuel these processes, which in turn can affect the ecosystem services provided by wetlands. However, the amount of spatial variability of DOC in wetlands is generally unknown. Furthermore, it is unknown how disturbance to wetlands can affect the spatial variability of DOC. Previous research in central Pennsylvania headwater wetland soils has shown that wetlands with increased human disturbance had decreased heterogeneity in soil biochemistry. To address groundwater chemical variability, 20 monitoring wells were installed in a random pattern in a 400 m² plot in a low-disturbance headwater wetland and a high-disturbance headwater wetland in central Pennsylvania. Water samples from these wells will be analyzed for DOC, dissolved inorganic carbon, nitrate, ammonia, and sulfate concentrations, as well as pH, conductivity, and temperature on a seasonal basis. It is hypothesized that there will be greater spatial variability of groundwater chemistry in the low-disturbance wetland than in the high-disturbance wetland. This poster will present the initial data concerning DOC spatial variability in both the low- and high-disturbance headwater wetlands.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
Effect of Random Thermal Spikes on Stirling Convertor Heater Head Reliability
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Halford, Gary R.
2004-01-01
Onboard radioisotope power systems being developed to support future NASA exploration missions require reliable design lifetimes of up to 14 yr and beyond. The structurally critical heater head of the high-efficiency developmental Stirling power convertor has undergone extensive computational analysis of operating temperatures (up to 650 C), stresses, and creep resistance of the thin-walled Inconel 718 bill of material. Additionally, assessment of the effect of uncertainties in the creep behavior of the thin-walled heater head, the variation in the manufactured thickness, variation in control temperature, and variation in pressure on the durability and reliability were performed. However, it is possible for the heater head to experience rare incidences of random temperature spikes (excursions) of short duration. These incidences could occur randomly with random magnitude and duration during the desired mission life. These rare incidences could affect the creep strain rate and therefore the life. The paper accounts for these uncertainties and includes the effect of such rare incidences, random in nature, on the reliability. The sensitivities of variables affecting the reliability are quantified and guidelines developed to improve the reliability are outlined. Furthermore, the quantified reliability is being verified with test data from the accelerated benchmark tests being conducted at the NASA Glenn Research Center.
Rubin, Ilan N; Ellner, Stephen P; Kessler, André; Morrell, Kimberly A
2015-09-01
1. Plant induced resistance to herbivory affects the spatial distribution of herbivores, as well as their performance. In recent years, theories regarding the benefit to plants of induced resistance have shifted from ideas of optimal resource allocation towards a more eclectic set of theories that consider spatial and temporal plant variability and the spatial distribution of herbivores among plants. However, consensus is lacking on whether induced resistance causes increased herbivore aggregation or increased evenness, as both trends have been experimentally documented. 2. We created a spatial individual-based model that can describe many plant-herbivore systems with induced resistance, in order to analyse how different aspects of induced resistance might affect herbivore distribution, and the total damage to a plant population, during a growing season. 3. We analyse the specific effects on herbivore aggregation of informed herbivore movement (preferential movement to less-damaged plants) and of information transfer between plants about herbivore attacks, in order to identify mechanisms driving both aggregation and evenness. We also investigate how the resulting herbivore distributions affect the total damage to plants and aggregation of damage. 4. Even, random and aggregated herbivore distributions can all occur in our model with induced resistance. Highest levels of aggregation occurred in the models with informed herbivore movement, and the most even distributions occurred when the average number of herbivores per plant was low. With constitutive resistance, only random distributions occur. Damage to plants was spatially correlated, unless plants recover very quickly from damage; herbivore spatial autocorrelation was always weak. 5. Our model and results provide a simple explanation for the apparent conflict between experimental results, indicating that both increased aggregation and increased evenness of herbivores can result from induced resistance. We demonstrate that information transfer from plants to herbivores, and from plants to neighbouring plants, can both be major factors in determining non-random herbivore distributions. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin
2017-08-04
In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.
NASA Astrophysics Data System (ADS)
Xue, Xiaofeng
2016-12-01
In this paper we are concerned with the contact process with random recovery rates on open clusters of bond percolation on Z^d. Let ξ be a random variable such that P(ξ ≥ 1) = 1, which ensures E[1/ξ] < +∞; we assign i.i.d. copies of ξ to the vertices as the random recovery rates. Assuming that each edge is open with probability p and that the infection can only spread through the open edges, we obtain that limsup_{d → +∞} λ_d ≤ λ_c = 1/(p E[1/ξ]), where λ_d is the critical value of the process on Z^d, i.e., the maximum of the infection rates with which the infection dies out with probability one when only the origin is infected at t = 0. To prove the above main result, we show that the following phase transition occurs. Assuming that ⌈log d⌉ vertices are infected at t = 0, where these vertices can be located anywhere, then when the infection rate λ > λ_c the process survives with high probability as d → +∞, while when λ < λ_c the process dies out at time O(log d) with high probability.
Qualitatively Assessing Randomness in SVD Results
NASA Astrophysics Data System (ADS)
Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.
2012-12-01
Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to identify a qualitative method of identifying random co-variability relationships between two data sets. The research identifies the heterogeneous correlation maps from several past results and compares these results with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine if a particular region on a correlation map may be explained by a physical mechanism or is simply statistical chance.
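The core computation behind such an analysis can be sketched in a few lines: form the cross-covariance matrix between two anomaly fields, take its SVD, and compare the squared-covariance fraction of the leading mode against the same quantity computed for random surrogate data. The fields and sizes below are synthetic placeholders, not the datasets used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def leading_scf(X, Y):
    """Squared-covariance fraction of the leading SVD mode.

    X, Y: (time, space) anomaly fields with the time mean removed.
    """
    C = X.T @ Y / (X.shape[0] - 1)          # cross-covariance matrix
    s = np.linalg.svd(C, compute_uv=False)
    return s[0]**2 / np.sum(s**2)

nt, nx, ny = 60, 100, 80                     # years x grid points (synthetic)
climate = rng.standard_normal((nt, nx))
hydro = rng.standard_normal((nt, ny))
hydro[:, :20] += 0.8 * climate[:, :1]        # inject a shared signal

observed = leading_scf(climate - climate.mean(0), hydro - hydro.mean(0))

# Null distribution: repeat with purely random climate fields
null = [leading_scf(rng.standard_normal((nt, nx)), hydro - hydro.mean(0))
        for _ in range(200)]
print("observed SCF:", round(observed, 3),
      "null 95th percentile:", round(np.percentile(null, 95), 3))
```

If the observed squared-covariance fraction does not exceed what purely random fields produce, the apparent co-variability pattern is plausibly statistical chance rather than a physical relationship.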
Design approaches to experimental mediation☆
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1} with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
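The δ-exceedance record process defined above is straightforward to simulate; a minimal sketch (illustrative handicap values and sample sizes, not the authors' code) follows.

```python
import numpy as np

rng = np.random.default_rng(3)

def record_fraction(n, delta, sampler):
    """Fraction of entries that are delta-exceedance records.

    An entry Y becomes the next record if Y > R - delta(k), where R is the
    value of the previous record; note that records may decrease in value.
    """
    records, current = 0, -np.inf
    for k in range(n):
        y = sampler()
        if y > current - delta(k):
            current = y
            records += 1
    return records / n

# Constant handicap with exponential variables: a finite record fraction
# (the stationary phase) is expected for sufficiently large delta, whereas
# delta = 0 recovers ordinary records with a vanishing record fraction.
for d in (0.0, 0.5, 2.0):
    frac = record_fraction(100_000, lambda k: d, rng.standard_exponential)
    print(f"delta = {d}: record fraction ~ {frac:.4f}")
```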
Maruyama, Takeo; Kimura, Motoo
1980-01-01
If a population (species) consists of n haploid lines (subpopulations) which reproduce asexually and each of which is subject to random extinction and subsequent replacement, it is shown that, at equilibrium in which mutational production of new alleles and their random extinction balance each other, the genetic diversity (1 minus the sum of squares of allelic frequencies) is given by 2Nev/(1 + 2Nev), where [Formula: see text] in which Ñ is the harmonic mean of the population size per line, n is the number of lines (assumed to be large), λ is the rate of line extinction, and v is the mutation rate (assuming the infinite neutral allele model). In a diploid population (species) consisting of n colonies, if migration takes place between colonies at the rate m (the island model) in addition to extinction and recolonization of colonies, it is shown that effective population size is [Formula: see text] If the rate of colony extinction (λ) is much larger than the migration rate of individuals, the effective population size is greatly reduced compared with the case in which no colony extinctions occur (in which case Ne = nÑ). The stepping-stone type of recolonization scheme is also considered. Bearing of these results on the interpretation of the level of genetic variability at the enzyme level observed in natural populations is discussed from the standpoint of the neutral mutation-random drift hypothesis. PMID:16592920
Testing a workplace physical activity intervention: a cluster randomized controlled trial
2011-01-01
Background Increased physical activity levels benefit both an individual's health and productivity at work. The purpose of the current study was to explore the impact and cost-effectiveness of a workplace physical activity intervention designed to increase physical activity levels. Methods A total of 1260 participants from 44 UK worksites (based within 5 organizations) were recruited to a cluster randomized controlled trial with worksites randomly allocated to an intervention or control condition. Measurement of physical activity and other variables occurred at baseline, and at 0 months, 3 months and 9 months post-intervention. Health outcomes were measured during a 30 minute health check conducted in worksites at baseline and 9 months post intervention. The intervention consisted of a 3 month tool-kit of activities targeting components of the Theory of Planned Behavior, delivered in-house by nominated facilitators. Self-reported physical activity (measured using the IPAQ short-form) and health outcomes were assessed. Results and discussion Multilevel modelling found no significant effect of the intervention on MET minutes of activity (from the IPAQ) at any of the follow-up time points controlling for baseline activity. However, the intervention did significantly reduce systolic blood pressure (B = -1.79 mm/Hg) and resting heart rate (B = -2.08 beats) and significantly increased body mass index (B = .18 units) compared to control. The intervention was found not to be cost-effective, however the substantial variability around this estimate suggested that further research is warranted. Conclusions The current study found mixed support for this worksite physical activity intervention. The paper discusses some of the tensions involved in conducting rigorous evaluations of large-scale randomized controlled trials in real-world settings. Trial registration Current controlled trials ISRCTN08807396 PMID:21481265
Compliance-Effect Correlation Bias in Instrumental Variables Estimators
ERIC Educational Resources Information Center
Reardon, Sean F.
2010-01-01
Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. P_f is calculated by Monte Carlo simulation and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase in the variation of φ_1, while R_f for the unit weights of both soils (γ_1 and γ_2) and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods, and the approach was found to be cost-effective. It is seen that if the variation of φ_1 remains within 5 %, a significant reduction in cross-sectional area can be achieved; but if the variation is more than 7-8 %, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
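A minimal sketch of the Monte Carlo part of such an analysis, for a single failure mode (sliding), is given below. The wall geometry, limit-state expression, and distribution parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000                                            # Monte Carlo samples

# Illustrative random variables (all assumptions): backfill friction angle
# and unit weight; foundation cohesion and friction angle.
phi1 = np.radians(rng.normal(32.0, 32.0 * 0.05, n))    # backfill friction
gamma1 = rng.normal(18.0, 18.0 * 0.03, n)              # backfill unit weight (kN/m3)
c2 = rng.normal(20.0, 20.0 * 0.10, n)                  # foundation cohesion (kPa)
phi2 = np.radians(rng.normal(28.0, 28.0 * 0.05, n))    # foundation friction

gamma_wall = 24.0                                      # concrete unit weight (kN/m3)
H, B, b_top = 6.0, 1.8, 0.5                            # wall height, base and top widths (m)
W = gamma_wall * 0.5 * (b_top + B) * H                 # weight of a trapezoidal wall (per m)

Ka = np.tan(np.pi / 4 - phi1 / 2) ** 2                 # Rankine active earth-pressure coefficient
Pa = 0.5 * Ka * gamma1 * H**2                          # active thrust (per m)

resisting = W * np.tan(phi2) + c2 * B                  # base sliding resistance
g = resisting - Pa                                     # limit state: g < 0 means sliding failure
print("estimated sliding failure probability:", np.mean(g < 0.0))
```

Repeating the estimate with each variable's coefficient of variation perturbed in turn gives the sensitivity information that the risk factors combine with P_f.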
Inpatient versus Outpatient Management of TIA or Minor Stroke: Clinical Outcome.
Majidi, Shahram; Leon Guerrero, Christopher R; Burger, Kathleen M; Rothrock, John F
2017-06-01
The management of patients with acute transient ischemic attack (TIA) or minor stroke is highly variable. Whether hospitalization of such patients significantly improves short-term clinical outcome is unknown. We assessed the short-term clinical outcome associated with inpatient versus outpatient management of patients with TIA or minor stroke. We evaluated a consecutive series of patients with acute TIA or minor ischemic stroke (NIH Stroke Scale score ≤ 3) presenting to a single emergency department (ED). We randomized patients to either hospital-based or outpatient-based management. All patients underwent interview and examination 7-10 days following the index event. This study included 100 patients, 41 with TIA and 59 with minor stroke. Nineteen (46%) of the TIA patients and 29 (49%) of the minor stroke patients were randomized to hospital management, and the remaining 22 TIA patients and 30 minor stroke patients were randomized to outpatient-based management. In the patients with a minor stroke, neurologic worsening occurred in 6 out of 29 (21%) in the inpatient arm compared with 3 out of 30 (10%) in the outpatient arm (p = 0.3). In none of these cases was acute interventional therapy or need for urgent admission considered medically appropriate. In the patients with a TIA, recurrence of a TIA occurred in 2 out of 19 (11%) in the inpatient arm compared with 2 out of 22 (9%) in the outpatient arm (p = 1). None of the patients with a TIA randomized to the inpatient arm experienced a stroke compared with 1 out of 22 in the outpatient arm (p = 1). There were no deaths in either group. Routine hospitalization of all patients with TIA or minor ischemic stroke may not positively affect short-term clinical outcome.
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods containing 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
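The classical single-variate spectral representation formula underlying these schemes can be sketched as follows. The target spectrum and all numerical values are illustrative assumptions, and the dimension-reduction step (constraining the random phases through a few elementary random variables) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative one-sided target spectrum (an assumption, not the wind
# spectrum used in the paper): a simple low-pass Lorentzian shape.
def S(w, sigma2=1.0, a=0.5):
    return sigma2 * (2.0 * a / np.pi) / (a**2 + w**2)

N, dw = 512, 0.05                        # frequency discretization
w = (np.arange(N) + 0.5) * dw
A = np.sqrt(2.0 * S(w) * dw)             # amplitude of each harmonic
phi = rng.uniform(0.0, 2.0 * np.pi, N)   # independent random phases

t = np.arange(0.0, 200.0, 0.1)
# Classical spectral representation: X(t) = sum_k A_k cos(w_k t + phi_k).
# (Production implementations evaluate this sum with the FFT, as noted above.)
X = A @ np.cos(np.outer(w, t) + phi[:, None])

print("sample variance:", X.var(), "target variance:", np.sum(S(w)) * dw)
```

Each realization reproduces the prescribed spectrum on average; the dimension-reduction variants replace the N independent phases with functions of only two or three elementary random variables.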
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
ERIC Educational Resources Information Center
Bancroft, Stacie L.; Bourret, Jason C.
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
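The distinction drawn above can be made concrete in a few lines of code (Python here rather than the Excel macros the article describes): a variable-ratio schedule cycles through a list of response requirements averaging the nominal value, while a random-ratio schedule reinforces each response with a fixed probability. Names and values below are illustrative.

```python
import random

random.seed(0)

def variable_ratio(ratios):
    """Yield True when reinforcement is delivered, cycling through a
    shuffled list of response requirements (a variable-ratio schedule)."""
    while True:
        random.shuffle(ratios)
        for r in ratios:
            for i in range(r):
                yield i == r - 1

def random_ratio(p):
    """Reinforce each response with constant probability p
    (a random-ratio schedule)."""
    while True:
        yield random.random() < p

def count_reinforcers(schedule, n_responses=10_000):
    return sum(next(schedule) for _ in range(n_responses))

print("VR-5 reinforcers:", count_reinforcers(variable_ratio([2, 4, 5, 6, 8])))
print("RR-5 reinforcers:", count_reinforcers(random_ratio(1 / 5)))
```

Both schedules deliver roughly one reinforcer per five responses on average, but the random-ratio version holds the reinforcement probability constant on every response.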
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
NASA Astrophysics Data System (ADS)
Ramesh, N.; Cane, M. A.
2017-12-01
The complex coupled ocean-atmosphere system of the Tropical Pacific generates variability on timescales from intraseasonal to multidecadal. Pacific Decadal Variability (PDV) is among the key drivers of global climate, with effects on hydroclimate on several continents, marine ecosystems, and the rate of global mean surface temperature rise under anthropogenic greenhouse gas forcing. Predicting phase shifts in the PDV would therefore be highly useful. However, the small number of PDV phase shifts that have occurred in the observational record pose a substantial challenge to developing an understanding of the mechanisms that underlie decadal variability. In this study, we use a 100,000-year unforced simulation from an intermediate-complexity model of the Tropical Pacific region that has been shown to produce PDV comparable to that in the real world. We apply the Simplex Projection method to the NINO3 index from this model to reconstruct a shadow manifold that preserves the topology of the true attractor of this system. We find that the high- and low-variance phases of PDV emerge as a pair of regimes in a 3-dimensional state space, and that the transitions between decadal states lie in a highly predictable region of the attractor. We then use a random forest algorithm to develop a physical interpretation of the processes associated with these highly-predictable transitions. We find that transitions to low-variance states are most likely to occur approximately 2.5 years after an El Nino event, and that ocean-atmosphere variables in the southeastern Tropical Pacific play a crucial role in driving these transitions.
Random Variables: Simulations and Surprising Connections.
ERIC Educational Resources Information Center
Quinn, Robert J.; Tomlinson, Stephen
1999-01-01
Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
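The key idea, bounding the number of firings of a channel by a binomial draw over the molecules available to it, can be sketched for a single degradation channel; the rate constant and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def binomial_leap_decay(x0, c, tau, t_end):
    """Binomial tau-leap simulation of the decay reaction X -> 0.

    Instead of a Poisson draw with mean c*x*tau (which can exceed x and
    drive the count negative), the number of firings in each step is
    sampled as Binomial(x, min(1, c*tau)), so it never exceeds the
    molecules currently present.
    """
    x, t, path = x0, 0.0, [(0.0, x0)]
    while t < t_end and x > 0:
        k = rng.binomial(x, min(1.0, c * tau))
        x -= k
        t += tau
        path.append((t, x))
    return path

# Compare the leap trajectory with the exact mean x0 * exp(-c*t).
x0, c = 1000, 0.5
for t, x in binomial_leap_decay(x0, c, tau=0.2, t_end=4.0):
    print(f"t = {t:.1f}  leap x = {x:4d}  exact mean = {x0 * np.exp(-c * t):7.1f}")
```

For species shared by several channels, the paper's sampling technique additionally bounds the total firings across those channels by the available molecule count; that bookkeeping is omitted from this single-channel sketch.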
Poor predictive value of breath hydrogen response for probiotic effects in IBS.
Yao, Chu K; Barrett, Jacqueline S; Philpott, Hamish; Chung, Alvin R T; van Langenberg, Daniel; Garg, Mayur; Gibson, Peter R
2015-12-01
Previous observations suggested that an early rise in breath hydrogen after lactulose (ERBHAL) may identify patients with irritable bowel syndrome (IBS) likely to respond to probiotics. Therefore, we aimed to (i) investigate whether treatment with a probiotic changes breath hydrogen response in patients with ERBHAL and (ii) whether these changes identify patients who may benefit symptomatically from probiotics. In a randomized, double-blind, placebo-controlled trial, patients with IBS (Rome III) were randomized to either 65 mL/day fermented milk product containing probiotic (FMPP) or placebo for 6 weeks, followed by 6 weeks' open-label treatment and 6 weeks' withdrawal. Breath hydrogen responses to lactulose (15 g) and liquid-gastric emptying time were evaluated before and at the end of each treatment period. Symptoms were measured using a 100-mm visual analog scale. Loss of ERBHAL occurred in 36% of 23 patients receiving FMPP and 41% of 22 receiving placebo (P = 1.00). Amongst 40 patients who completed open-label FMPP treatment, ERBHAL was lost in a further 38%, continued in 25%, and regained in 10%. Similar variability occurred in the withdrawal phase. Variability was unrelated to changes in gastric emptying. No differences in symptom response were seen between treatment groups nor in relation to the loss or retention of ERBHAL. Breath hydrogen patterns after lactulose are poorly reproducible. No FMPP-specific effects on fermentation patterns or symptoms were observed. The presence of ERBHAL is not useful to predict symptomatic response to probiotic therapy in patients with IBS. © 2015 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
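A minimal sketch of the data-preparation step this article proposes is shown below: columns that were never measured in a group are filled with pseudo-random normal deviates so that every group has the same set of manifest variables, and the multiple-group model is then specified so that those columns carry no information (e.g., loadings fixed to zero). The variable names and data frames are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

def fill_missing_manifest(df, all_variables):
    """Add pseudo-random standard-normal columns for manifest variables
    that were never measured in this group, so that every group's data
    matrix has identical columns. The SEM specification must then fix the
    parameters of these columns so they do not influence estimation."""
    out = df.copy()
    for var in all_variables:
        if var not in out.columns:
            out[var] = rng.standard_normal(len(out))
    return out[all_variables]

# Hypothetical example: group B never administered item "y3".
all_vars = ["y1", "y2", "y3"]
group_b = pd.DataFrame(rng.standard_normal((5, 2)), columns=["y1", "y2"])
print(fill_missing_manifest(group_b, all_vars))
```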
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
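The α-cut step that turns each fuzzy quantity into a nested family of intervals can be illustrated for a triangular fuzzy number; the numbers below are placeholders, not brake parameters from the paper.

```python
import numpy as np

def alpha_cut_triangular(left, peak, right, alpha):
    """Interval obtained by cutting a triangular fuzzy number
    (left, peak, right) at membership level alpha in [0, 1]."""
    lo = left + alpha * (peak - left)
    hi = right - alpha * (right - peak)
    return lo, hi

# Example: a fuzzy friction coefficient with support [0.3, 0.5] and peak 0.4.
for alpha in np.linspace(0.0, 1.0, 5):
    lo, hi = alpha_cut_triangular(0.3, 0.4, 0.5, alpha)
    print(f"alpha = {alpha:.2f}: mu in [{lo:.3f}, {hi:.3f}]")
```

At each α level the random-fuzzy model reduces to a random-interval model whose bounds are propagated (for instance by the Taylor expansion mentioned above), and recomposing the results over the α levels yields the fuzzy reliability indexes and probabilities.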
Carter, J.; Merino, J.H.; Merino, S.L.
2009-01-01
Estimates of submerged aquatic vegetation (SAV) along the U.S. Gulf of Mexico (Gulf) generally focus on seagrasses. In 2000, we attempted a synoptic survey of SAV in the mesohaline (5-20 ppt) zone of estuarine and nearshore areas of the northeastern Gulf. Areas with SAV were identified from existing 1992 aerial photography, and a literature review was used to select those areas that were likely to experience mesohaline conditions during the growing season. In 2000, a drought year, we visited 217 randomly selected SAV beds and collected data on species composition and environmental conditions. In general, sites were either clearly polyhaline (≥ 20 ppt) or oligohaline (≤ 5 ppt), with only five sites measuring between 5 and 20 ppt. Ruppia maritima L. (13-35 ppt, n = 28) was the only species that occurred in mesohaline salinities. Halodule wrightii Asch. occurred in 73% of the beds. The nonindigenous Myriophyllum spicatum L. was present in four locations with salinities below 3 ppt. No nonindigenous macroalgae were identified, and no nonindigenous angiosperms occurred in salinities above 3 ppt. Selecting sample locations based on historical salinity data was not a successful strategy for surveying SAV in mesohaline systems, particularly during a drought year. Our ability to locate SAV beds within 50 m of their aerially located position 8 yr later demonstrates some SAV stability in the highly variable conditions of the study area. © 2009 by the Marine Environmental Sciences Consortium of Alabama.
Wezner-Ptasinska, Magdalena; Otlewski, Jacek
2015-12-01
Variable lymphocyte receptors (VLRs) are non-immunoglobulin components of adaptive immunity in jawless vertebrates. These proteins composed of leucine-rich repeat modules offer some advantages over antibodies in target binding and therefore are attractive candidates for biotechnological applications. In this paper we report the design and characterization of a phage display library based on a previously proposed dVLR scaffold containing six LRR modules [Wezner-Ptasinska et al., 2011]. Our library was designed based on a consensus approach in which the randomization scheme reflects the frequencies of amino acids naturally occurring in respective positions responsible for antigen recognition. We demonstrate general applicability of the scaffold by selecting dVLRs specific for lysozyme and S100A7 protein with KD values in the micromolar range. The dVLR library could be used as a convenient alternative to antibodies for effective isolation of high affinity binders.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
Conradi, Una; Joffe, Ari R
2017-07-07
To determine a direct measure of publication bias by determining subsequent full-paper publication (P) of studies reported in animal research abstracts presented at an international conference (A). We selected 100 random (using a random-number generator) A from the 2008 Society of Critical Care Medicine Conference. Using a data collection form and study manual, we recorded methodology and result variables from A. We searched PubMed and EMBASE to June 2015, and DOAJ and Google Scholar to May 2017 to screen for subsequent P. Methodology and result variables were recorded from P to determine changes in reporting from A. Predictors of P were examined using Fisher's Exact Test. 62% (95% CI 52-71%) of studies described in A were subsequently P after a median 19 [IQR 9-33.3] months from conference presentation. Reporting of studies in A was of low quality: randomized 27% (the method of randomization and allocation concealment not described), blinded 0%, sample-size calculation stated 0%, specifying the primary outcome 26%, numbers given with denominators 6%, and stating number of animals used 47%. Only being an orally presented (vs. poster presented) A (14/16 vs. 48/84, p = 0.025) predicted P. Reporting of studies in P was of poor quality: randomized 39% (the method of randomization and allocation concealment not described), likely blinded 6%, primary outcome specified 5%, sample size calculation stated 0%, numbers given with denominators 34%, and number of animals used stated 56%. Changes in reporting from A to P occurred: from non-randomized to randomized 19%, from non-blinded to blinded 6%, from negative to positive outcomes 8%, from having to not having a stated primary outcome 16%, and from non-statistically to statistically significant findings 37%. Post-hoc, using publication data, P was predicted by having positive outcomes (published 62/62, unpublished 33/38; p = 0.003), or statistically significant results (published 58/62, unpublished 20/38; p < 0.001). Only 62% (95% CI 52-71%) of animal research A are subsequently P; this was predicted by oral presentation of the A, finally having positive outcomes, and finally having statistically significant results. Publication bias is prevalent in critical care animal research.
Enhancing Condom Use Among Black Male Youths: A Randomized Controlled Trial
Charnigo, Richard J.; Salazar, Laura F.; Pasternak, Ryan; Terrell, Ivy W.; Ricks, JaNelle; Smith, Rachel V.; Taylor, Stephanie N.
2014-01-01
Objectives. We tested the efficacy of a brief intervention to promote correct and consistent use of condoms among Black male youths attending sexually transmitted infection (STI) clinics in 3 southern US cities. Methods. In 2010 to 2012, we screened (n = 1102) and enrolled (n = 702) youths aged 15 to 23 years who identified as Black and reported recent (past 2 months) sexual activity and randomized them to a private, brief, interactive intervention (n = 349) or an attention-equivalent control condition (n = 353). Assessments occurred at baseline and 2 and 6 months after the intervention. Results. At 6 months, with adjustment for age and pretest nonequivalence of the outcome variable, an estimated odds ratio (EOR) of 1.63 (95% confidence interval [CI] = 1.07, 2.49; P = .02) indicated efficacy for correct condom use. An adjusted generalized estimating equations model with both 2- and 6-month condom use variables produced an EOR of 1.49 (95% CI = 1.06, 2.08; P = .02). We did not observe significant effects on chlamydia and gonorrhea incidence. Conclusions. This brief intervention, delivered as part of STI clinical care, could help alleviate the disproportionate STI–HIV burden among young Black men. PMID:25211749
2010-08-01
This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the …
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
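A brute-force version of the two-loop structure that this article collapses into a single loop can be sketched as follows: an outer sweep over the interval variables and interval distribution parameters, and an inner reliability analysis over the random variables (plain Monte Carlo here instead of the first order reliability method). The limit-state function and all numbers are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)

def failure_probability(mu_x, y, n=100_000):
    """Inner loop: Monte Carlo reliability analysis for a random variable X
    whose mean mu_x is interval-valued, together with interval variable y.
    Illustrative limit state: g = 3.0 - X - 0.5*y, failure when g < 0."""
    x = rng.normal(mu_x, 0.3, n)
    return np.mean(3.0 - x - 0.5 * y < 0.0)

# Outer loop: sweep the interval quantities over their bounds. Checking only
# the interval end points suffices here because g is monotone in both.
mu_x_interval = (1.0, 1.4)   # interval distribution parameter (second order uncertainty)
y_interval = (0.5, 1.5)      # interval variable
pfs = [failure_probability(mu, y)
       for mu, y in itertools.product(mu_x_interval, y_interval)]
print("min/max failure probability:", min(pfs), max(pfs))
```

The single-loop method replaces this nested search by imposing the KKT optimality conditions of the inner interval analysis as constraints on the reliability problem, avoiding the repeated inner optimizations.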
NASA Astrophysics Data System (ADS)
McCaffrey, D. R.; Hopkinson, C.
2017-12-01
Alpine Treeline Ecotone (ATE), the transition zone between closed canopy forest and alpine tundra, is a prominent vegetation pattern in mountain regions. At continental scales, the elevation of ATE is negatively correlated with latitude and is generally explained by thermal limitations. However, at landscape scales, precipitation and moisture regimes can suppress ATE elevation below thermal limits, causing variability and patterning in ATE position. Recent studies have investigated the relative effects of hydroclimatic variables on ATE position at multiple scales, but less attention has been given to interactions between hydroclimatic variables and disturbance agents, such as fire. Observing change in the ATE at sufficient spatial resolution and temporal extent to identify correlations between topographic variables and disturbance agents has proved challenging. Recent advances in monoplotting have enabled the extraction of canopy cover information from oblique photography, at a resolution of 20 m. Using airborne lidar and repeat photography from the Mountain Legacy Project, we observed canopy cover change in West Castle Watershed (Alberta, Canada; 103 km2; 49.3° N, 114.4° W) over a 92-year period (i.e. 1914-2006). Two wildfires, occurring 1934 and 1936, affected 63% of the watershed area, providing an opportunity to contrast topographic patterns of mortality and succession in the ATE, while factoring by exposure to fire. Slope aspect was a strong predictor of mortality and succession: the frequency of mortality was four times higher in fire-exposed areas, with 72% of all mortality occurring on south- and east-facing slope aspects; the frequency of succession was balanced between fire-exposed and unexposed areas, with 66% of all succession occurring on north- and east-facing slope aspects. Given that previous experiments have demonstrated that moisture limitation inhibits tree establishment, suppressing elevation of ATE below thermal growth boundaries, we hypothesize that moisture limitation is selectively acting on warm slope aspects to inhibit tree establishment, post-fire. Support for this hypothesis is provided by comparing hydrometeorological variable importance in a random forest model of land cover change in the watershed.
Interannual variability of global dust storms on Mars.
Haberle, R M
1986-10-24
Global dust storms on Mars occur in some years but not in others. If the four Mars years of Viking data are representative, some distinguishing characteristics can be inferred. In years with global dust storms, dust is raised in the southern hemisphere and spread over much of the planet by an intensified Hadley circulation. In years without global dust storms, dust is raised in the northern hemisphere by relatively active mid-latitude storm systems but does not spread globally. In both cases the dusty season is winter in the north. Assuming that the cross-equatorial Hadley circulation plays a key role in the onset of global dust storms, it is shown from numerical simulations that a northern hemisphere dust haze weakens its intensity and, hence, its contribution to the surface stress in the southern hemisphere. This, in turn, reduces the possibility of global dust storm development. The interannual variability is therefore the result either of a competition between circulations in opposite hemispheres, in which case the variability has a random component, or it is the result of the cycling of dust between hemispheres, in which case the variability is related to the characteristics of global dust storms themselves.
Digital image analysis of a turbulent flame
NASA Astrophysics Data System (ADS)
Zucherman, L.; Kawall, J. G.; Keffer, J. F.
1988-01-01
Digital image analysis of cine pictures of an unconfined rich premixed turbulent flame has been used to determine structural characteristics of the turbulent/non-turbulent interface of the flame. The results, comprising various moments of the interface position, probability density functions and correlation functions, establish that the instantaneous flame-interface position is essentially a Gaussian random variable with a superimposed quasi-periodical component. The latter is ascribable to a pulsation caused by the convection and the stretching of ring vortices present within the flame. To a first approximation, the flame can be considered similar to a three-dimensional axisymmetric turbulent jet, with superimposed ring vortices, in which combustion occurs.
Axelrod, David M; Sutherland, Scott M; Anglemyer, Andrew; Grimm, Paul C; Roth, Stephen J
2016-02-01
Acute kidney injury occurs commonly in children following congenital cardiac surgery with cardiopulmonary bypass and has been associated with increased morbidity and mortality. Aminophylline, a methylxanthine nonselective adenosine receptor antagonist, has been effective in the management of acute kidney injury in certain populations. This study sought to determine whether postoperative administration of aminophylline attenuates acute kidney injury in children undergoing congenital cardiac surgery with cardiopulmonary bypass. Single-center, double-blinded, placebo-controlled, randomized clinical trial. Tertiary center, pediatric cardiovascular ICU. A total of 144 children after congenital heart surgery with cardiopulmonary bypass. Seventy-two patients were randomized to receive aminophylline and 72 patients received placebo. Study drug was administered every 6 hours for 72 hours. The primary outcome variable was the development of any acute kidney injury, defined by the serum creatinine criteria of the Kidney Diseases: Improving Global Outcomes. Secondary outcomes included the development of severe acute kidney injury, time between cardiovascular ICU admission and first successful extubation, percent fluid overload, total fluid balance, urine output, bioelectrical impedance, and serum neutrophil gelatinase-associated lipocalin. The unadjusted rate and severity of acute kidney injury were not different between groups; 43 of 72 (60%) of the treatment group and 36 of 72 (50%) of the placebo group developed acute kidney injury (p = 0.32). Stage 2/3 acute kidney injury occurred in 23 of 72 (32%) of the treatment group and 15 of 72 (21%) of the placebo group (p = 0.18). Secondary outcome measures also demonstrated no significant difference between treatment and placebo groups. Aminophylline administration was safe; no deaths occurred in either group, and rates of adverse events were similar (14% in the treatment group vs 18% in the placebo group; p = 0.30). In this placebo-controlled randomized clinical trial, we found no effect of aminophylline to prevent acute kidney injury in children recovering from cardiac surgery performed with cardiopulmonary bypass. Future study of preoperative aminophylline administration to prevent acute kidney injury may be warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubenets, Elena R.
We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dimH≥3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
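The standard result cited above is easy to check numerically; the following sketch (a minimal illustration under assumed parameters, not taken from the article) verifies that the usual (n − 1)-denominator sample variance is unbiased for σ².

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0          # common mean and standard deviation (assumed values)
n, trials = 10, 200_000       # sample size and number of Monte Carlo repetitions

# Draw many independent samples of size n and compute the usual sample variance for each.
samples = rng.normal(mu, sigma, size=(trials, n))
sample_vars = samples.var(axis=1, ddof=1)

print("average sample variance:", sample_vars.mean())  # close to sigma**2 = 4
print("true variance sigma^2  :", sigma**2)
```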
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty for the geometry of rigid parts is expressed as uniform random variables, while the uncertainty for the material properties of flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
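The non-intrusive step described above starts from well-spread samples of the random inputs; the sketch below (an illustrative stand-in with invented parameter values, not the authors' code) builds a basic Latin Hypercube sample for one uniform geometric parameter and one Gaussian material parameter, which a deterministic multibody solver would then be run on point by point.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_dims, rng):
    """Basic LHS: one stratified uniform draw per dimension, columns independently permuted."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u  # values in (0, 1)

rng = np.random.default_rng(42)
u = latin_hypercube(200, 2, rng)

# Map the unit-cube samples to the physical random variables (hypothetical values):
# a uniformly distributed link length and a Gaussian Young's modulus.
length = 0.30 + 0.01 * (2 * u[:, 0] - 1)           # U(0.29, 0.31) m
youngs = norm.ppf(u[:, 1], loc=2.0e11, scale=5e9)  # N(2e11, 5e9) Pa

# Each (length, youngs) pair would be fed to the deterministic generalized-alpha solver;
# response statistics are then estimated from the resulting ensemble.
print(length[:3], youngs[:3])
```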
Reuter, I.; Mehnert, S.; Leone, P.; Kaps, M.; Oechsner, M.; Engelhardt, M.
2011-01-01
Symptoms of Parkinson's disease (PD) progress despite optimized medical treatment. The present study investigated the effects of a flexibility and relaxation programme, walking, and Nordic walking (NW) on walking speed, stride length, stride length variability, Parkinson-specific disability (UPDRS), and health-related quality of life (PDQ 39). 90 PD patients were randomly allocated to the 3 treatment groups. Patients participated in a 6-month study with 3 exercise sessions per week, each lasting 70 min. Assessment after completion of the training showed that pain was reduced in all groups, and balance and health-related quality of life were improved. Furthermore, walking and Nordic walking additionally improved stride length, gait variability, maximal walking speed, exercise capacity at submaximal level, and PD-specific disability on the UPDRS. Nordic walking was superior to the flexibility and relaxation programme and walking in improving postural stability, stride length, gait pattern and gait variability. No significant injuries occurred during the training. All patients of the Nordic walking group continued Nordic walking after completing the study. PMID:21603199
Do little interactions get lost in dark random forests?
Wright, Marvin N; Ziegler, Andreas; König, Inke R
2016-03-31
Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects of other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most of the cases, interactions are masked by marginal effects and cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
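To illustrate the kind of comparison described above, the following sketch (using simulated data, not the authors' gene-gene simulation design) contrasts Gini and permutation variable importance from a scikit-learn random forest when the outcome depends only on an interaction of two predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
X = rng.integers(0, 3, size=(n, 6)).astype(float)  # six SNP-like predictors coded 0/1/2

# Outcome driven by an interaction of the first two variables only.
logit = 1.2 * ((X[:, 0] > 0) & (X[:, 1] > 0)) - 0.6
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

print("Gini importance       :", np.round(rf.feature_importances_, 3))
perm = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
print("Permutation importance:", np.round(perm.importances_mean, 3))
# Both measures tend to rank variables 0 and 1 highest (the interaction is "captured"),
# but neither indicates that the signal is an interaction rather than two marginal effects.
```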
The influence of an uncertain force environment on reshaping trial-to-trial motor variability.
Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko
2014-09-10
Motor memory is updated to generate ideal movements in a novel environment. When the environment changes randomly from trial to trial, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol where individuals practiced moving in a force field into which noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that the motor variability was significantly magnified by the adaptation to the random force field. Temporal profiles of the motor variance were significantly dissociable between two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by the adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.
Burgess, Wendy; McIntyre, Jordan P. R.; Gunn, Alistair J.; Lear, Christopher A.; Bennet, Laura; Mitchell, Edwin A.; Thompson, John M. D.
2016-01-01
Key points: Fetal behavioural state in healthy late gestation pregnancy is affected by maternal position. Fetal state 1F is more likely to occur in maternal supine or right lateral positions. Fetal state 4F is less likely to occur when the woman lies supine or semi‐recumbent. Fetal state change is more likely when the woman is supine or semi‐recumbent. Fetal heart rate variability is affected by maternal position with variability reduced in supine and semi‐recumbent positions. Abstract: Fetal behavioural states (FBS) are measures of fetal wellbeing. In acute hypoxaemia, the human fetus adapts to a lower oxygen consuming state with changes in the cardiotocograph and reduced fetal activity. Recent studies of late gestation stillbirth described the importance of sleep position in the risk of intrauterine death. We designed this study to assess the effects of different maternal positions on FBS in healthy late gestation pregnancies under controlled conditions. Twenty‐nine healthy women had continuous fetal ECG recordings under standardized conditions in four randomly allocated positions: left lateral, right lateral, supine and semi‐recumbent. Two blinded observers assigned fetal states in 5 min blocks. Measures of fetal heart rate variability were calculated from ECG beat to beat data. Compared to state 2F, state 4F was less likely to occur when women were semi‐recumbent [odds ratio (OR) = 0.11, 95% confidence interval (95% CI) 0.02, 0.55], and supine (OR = 0.27, 95% CI 0.07, 1.10). State 1F was more likely on the right (OR = 2.36, 95% CI 1.11, 5.04) or supine (OR = 4.99, 95% CI 2.41, 10.43) compared to the left. State change was more likely when the mother was semi‐recumbent (OR = 2.17, 95% CI 1.19, 3.95) or supine (OR = 2.67, 95% CI 1.46, 4.85). There was a significant association of maternal position to mean fetal heart rate. The measures of heart rate variability (SDNN and RMSSD) were reduced in both semi‐recumbent and supine positions. In healthy late gestation pregnancy, maternal position affects FBS and heart rate variability. These effects are likely fetal adaptations to positions which may produce a mild hypoxic stress. PMID:27871127
Stone, Peter R; Burgess, Wendy; McIntyre, Jordan P R; Gunn, Alistair J; Lear, Christopher A; Bennet, Laura; Mitchell, Edwin A; Thompson, John M D
2017-02-15
Fetal behavioural state in healthy late gestation pregnancy is affected by maternal position. Fetal state 1F is more likely to occur in maternal supine or right lateral positions. Fetal state 4F is less likely to occur when the woman lies supine or semi-recumbent. Fetal state change is more likely when the woman is supine or semi-recumbent. Fetal heart rate variability is affected by maternal position with variability reduced in supine and semi-recumbent positions. Fetal behavioural states (FBS) are measures of fetal wellbeing. In acute hypoxaemia, the human fetus adapts to a lower oxygen consuming state with changes in the cardiotocograph and reduced fetal activity. Recent studies of late gestation stillbirth described the importance of sleep position in the risk of intrauterine death. We designed this study to assess the effects of different maternal positions on FBS in healthy late gestation pregnancies under controlled conditions. Twenty-nine healthy women had continuous fetal ECG recordings under standardized conditions in four randomly allocated positions: left lateral, right lateral, supine and semi-recumbent. Two blinded observers assigned fetal states in 5 min blocks. Measures of fetal heart rate variability were calculated from ECG beat to beat data. Compared to state 2F, state 4F was less likely to occur when women were semi-recumbent [odds ratio (OR) = 0.11, 95% confidence interval (95% CI) 0.02, 0.55], and supine (OR = 0.27, 95% CI 0.07, 1.10). State 1F was more likely on the right (OR = 2.36, 95% CI 1.11, 5.04) or supine (OR = 4.99, 95% CI 2.41, 10.43) compared to the left. State change was more likely when the mother was semi-recumbent (OR = 2.17, 95% CI 1.19, 3.95) or supine (OR = 2.67, 95% CI 1.46, 4.85). There was a significant association of maternal position to mean fetal heart rate. The measures of heart rate variability (SDNN and RMSSD) were reduced in both semi-recumbent and supine positions. In healthy late gestation pregnancy, maternal position affects FBS and heart rate variability. These effects are likely fetal adaptations to positions which may produce a mild hypoxic stress. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Sullivan, Katherine J; Knowlton, Barbara J; Dobkin, Bruce H
2002-05-01
To investigate the effect of practice paradigms that varied treadmill speed during step training with body weight support in subjects with chronic hemiparesis after stroke. Randomized, repeated-measures pilot study with 1- and 3-month follow-ups. Outpatient locomotor laboratory. Twenty-four individuals with hemiparetic gait deficits whose walking speeds were at least 50% below normal. Participants were stratified by locomotor severity based on initial walking velocity and randomly assigned to treadmill training at slow (0.5mph), fast (2.0mph), or variable (0.5, 1.0, 1.5, 2.0mph) speeds. Participants received 20 minutes of training per session for 12 sessions over 4 weeks. Self-selected overground walking velocity (SSV) was assessed at the onset, middle, and end of training, and 1 and 3 months later. SSV improved in all groups compared with baseline (P<.001). All groups increased SSV in the 1-month follow-up (P<.01) and maintained these gains at the 3-month follow-up (P=.77). The greatest improvement in SSV across training occurred with fast training speeds compared with the slow and variable groups combined (P=.04). Effect size (ES) was large between fast compared with slow (ES=.75) and variable groups (ES=.73). Training at speeds comparable with normal walking velocity was more effective in improving SSV than training at speeds at or below the patient's typical overground walking velocity. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
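The comonotonic upper bound mentioned above replaces the actual dependence between summands by perfect positive dependence; the small Monte Carlo sketch below (illustrative only, not the paper's Lee-Carter computation) compares a sum of correlated lognormal variables with its comonotonic counterpart in the stop-loss sense.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Two positively but imperfectly correlated lognormal summands (assumed toy marginals).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
x1, x2 = np.exp(z[:, 0]), np.exp(z[:, 1])
s_actual = x1 + x2

# Comonotonic version: same marginals coupled through a single uniform variable,
# approximated here by sorting both samples and adding them rank by rank.
s_comon = np.sort(x1) + np.sort(x2)

def stop_loss(sample, d):
    """E[(S - d)_+], the stop-loss premium at retention d."""
    return np.mean(np.clip(sample - d, 0.0, None))

for d in (2.0, 5.0, 10.0):
    print(d, stop_loss(s_actual, d), "<=", stop_loss(s_comon, d))
# The comonotonic sum dominates the actual sum in the stop-loss (increasing convex) order.
```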
Axelrod, David M.; Sutherland, Scott M.; Anglemyer, Andrew; Grimm, Paul C.; Roth, Stephen J.
2015-01-01
Objective Acute kidney injury (AKI) occurs commonly in children following congenital cardiac surgery with cardiopulmonary bypass (CPB) and has been associated with increased morbidity and mortality. Aminophylline, a methylxanthine nonselective adenosine receptor antagonist, has been effective in the management of AKI in certain populations. This study sought to determine if post-operative administration of aminophylline attenuates AKI in children undergoing congenital cardiac surgery with CPB. Design Single-center, double-blinded, placebo-controlled, randomized clinical trial (RCT). Setting Tertiary center, pediatric cardiovascular intensive care unit. Patients 144 children after congenital heart surgery with CPB. Interventions Seventy-two patients were randomized to receive aminophylline and 72 patients received placebo. Study drug was administered every six hours for 72 hours. Measurements and Main Results The primary outcome variable was development of any AKI, defined by the serum creatinine criteria of the Kidney Diseases: Improving Global Outcomes (KDIGO) criteria. Secondary outcomes included the development of severe AKI, time between CVICU admission and first successful extubation, percent fluid overload, total fluid balance, urine output, bioelectrical impedance, and serum neutrophil gelatinase-associated lipocalin (NGAL). The unadjusted rate and severity of AKI were not different between groups; 43/72 (60%) of the treatment group and 36/72 (50%) of the placebo group developed AKI (p=0.32). Stage 2/3 AKI occurred in 23/72 (32%) of the treatment group and 15/72 (21%) of the placebo group (p=0.18). Secondary outcome measures also demonstrated no significant difference between treatment and placebo groups. Aminophylline administration was safe; no deaths occurred in either group, and rates of adverse events were similar (14% in the treatment group versus 18% in the placebo group, p =0.30). Conclusions In this placebo-controlled RCT, we found no effect of aminophylline to prevent AKI in children recovering from cardiac surgery performed with CPB. Future study of pre-operative aminophylline administration to prevent AKI may be warranted. PMID:26669642
Interannual variability of planet-encircling dust storms on Mars
NASA Technical Reports Server (NTRS)
Zurek, Richard W.; Martin, Leonard J.
1993-01-01
A recent review of earth-based telescopic observations of Mars together with Viking orbiter and lander data are employed to estimate the frequency of occurrence of planet-encircling dust storms over the past century and to test whether the period spanned by the Mariner 9 and Viking missions to Mars is representative of the decades prior to 1950. Both spacecraft and earth-based observations suggest that planet-encircling dust storms on Mars occur during a 'dust storm season' in southern spring and summer. Viking data show that planet-encircling dust storms could have occurred in the past on Mars without being detected from earth during years in which Mars was far from earth during the dust storm season. Planet-encircling storms were absent during the dust storm seasons monitored during several favorable oppositions prior to 1956 and after 1986. The chance of a planet-encircling dust storm occurring in any arbitrary Mars year is estimated to be approximately one in three, if this occurrence is random from year to year and yet restricted seasonally to southern spring and summer.
Provine, Robert R.; Emmorey, Karen
2008-01-01
The placement of laughter in the speech of hearing individuals is not random but “punctuates” speech, occurring during pauses and at phrase boundaries where punctuation would be placed in a transcript of a conversation. For speakers, language is dominant in the competition for the vocal tract since laughter seldom interrupts spoken phrases. For users of American Sign Language, however, laughter and language do not compete in the same way for a single output channel. This study investigated whether laughter occurs simultaneously with signing, or punctuates signing, as it does speech, in 11 signed conversations (with two to five participants) that had at least one instance of audible, vocal laughter. Laughter occurred 2.7 times more often during pauses and at phrase boundaries than simultaneously with a signed utterance. Thus, the production of laughter involves higher order cognitive or linguistic processes rather than the low-level regulation of motor processes competing for a single vocal channel. In an examination of other variables, the social dynamics of deaf and hearing people were similar, with “speakers” (those signing) laughing more than their audiences and females laughing more than males. PMID:16891353
Provine, Robert R; Emmorey, Karen
2006-01-01
The placement of laughter in the speech of hearing individuals is not random but "punctuates" speech, occurring during pauses and at phrase boundaries where punctuation would be placed in a transcript of a conversation. For speakers, language is dominant in the competition for the vocal tract since laughter seldom interrupts spoken phrases. For users of American Sign Language, however, laughter and language do not compete in the same way for a single output channel. This study investigated whether laughter occurs simultaneously with signing, or punctuates signing, as it does speech, in 11 signed conversations (with two to five participants) that had at least one instance of audible, vocal laughter. Laughter occurred 2.7 times more often during pauses and at phrase boundaries than simultaneously with a signed utterance. Thus, the production of laughter involves higher order cognitive or linguistic processes rather than the low-level regulation of motor processes competing for a single vocal channel. In an examination of other variables, the social dynamics of deaf and hearing people were similar, with "speakers" (those signing) laughing more than their audiences and females laughing more than males.
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. A theorem shows that the variance of a fuzzy random variable is itself a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. Using a weighting method, we obtain a linear programming problem with fuzzy coefficients, which we solve by a simplex method for fuzzy linear programming.
Exploiting Data Missingness in Bayesian Network Modeling
NASA Astrophysics Data System (ADS)
Rodrigues de Morais, Sérgio; Aussem, Alex
This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy boolean variables, which represent the presence/absence of the respective random variable value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contributes to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
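A minimal way to exploit missingness in the spirit described above is to add a boolean indicator per feature and let the classifier use it alongside an imputed value; the sketch below (a generic stand-in with simulated data, not the authors' Bayesian-network implementation) shows the idea with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=(n, 2))
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Informative (non-random) missingness: x1 is more often missing when y == 1.
missing = rng.random(n) < np.where(y == 1, 0.6, 0.1)
x_obs = x.copy()
x_obs[missing, 1] = np.nan

# Augment the features with dummy boolean "is missing" variables and impute with column means.
indicator = np.isnan(x_obs).astype(float)
x_imputed = np.where(np.isnan(x_obs), np.nanmean(x_obs, axis=0), x_obs)
x_aug = np.hstack([x_imputed, indicator])

acc_plain = LogisticRegression().fit(x_imputed, y).score(x_imputed, y)
acc_aug = LogisticRegression().fit(x_aug, y).score(x_aug, y)
print("imputed only     :", round(acc_plain, 3))
print("with missingness :", round(acc_aug, 3))
# When missingness itself carries class information, the indicator columns improve accuracy.
```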
The Bayesian group lasso for confounded spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.
2017-01-01
Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.
Nonrecurrence and Bell-like inequalities
NASA Astrophysics Data System (ADS)
Danforth, Douglas G.
2017-12-01
The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = ∅. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There is an uncountably infinite number of reals. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. ΛN induces sample independence. All correlations are context dependent but not in the usual sense. There is no "spooky action at a distance". Random variables, belonging to ΛN, are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section when the basic variables, such as the radiation quantities frequently used in radiation physics and chemistry, contain random components. Additionally, a layer attenuation coefficient (LAC) and a perturbed LAC (PLAC) are proposed for different contact materials. The perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also of the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
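The flavour of these results can be reproduced with a simple Monte Carlo experiment on the Beer-Lambert law; the parameter values below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

I0 = 1.0          # incident photon intensity (arbitrary units)
x = 2.0           # absorber thickness in cm (assumed)
mu_mean = 0.20    # mean linear attenuation coefficient in 1/cm (assumed)
mu_sd = 0.02      # random perturbation of the attenuation coefficient (assumed)

# Perturb the attenuation coefficient around its average behavior and propagate
# the randomness through the Beer-Lambert law I = I0 * exp(-mu * x).
mu = rng.normal(mu_mean, mu_sd, size=n)
I = I0 * np.exp(-mu * x)

print("mean transmitted intensity:", I.mean())
print("standard deviation        :", I.std())
print("corr(mu, I)               :", np.corrcoef(mu, I)[0, 1])  # strongly negative
```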
Adaptive random walks on the class of Web graphs
NASA Astrophysics Data System (ADS)
Tadić, B.
2001-09-01
We study random walk with adaptive move strategies on a class of directed graphs with variable wiring diagram. The graphs are grown from the evolution rules compatible with the dynamics of the world-wide Web [B. Tadić, Physica A 293, 273 (2001)], and are characterized by a pair of power-law distributions of out- and in-degree for each value of the parameter β, which measures the degree of rewiring in the graph. The walker adapts its move strategy according to locally available information both on out-degree of the visited node and in-degree of target node. A standard random walk, on the other hand, uses the out-degree only. We compute the distribution of connected subgraphs visited by an ensemble of walkers, the average access time and survival probability of the walks. We discuss these properties of the walk dynamics relative to the changes in the global graph structure when the control parameter β is varied. For β ≥ 3, corresponding to the world-wide Web, the access time of the walk to a given level of hierarchy on the graph is much shorter compared to the standard random walk on the same graph. By reducing the amount of rewiring towards the rigidity limit β → βc ≲ 0.1, corresponding to the range of naturally occurring biochemical networks, the survival probabilities of the adaptive and standard random walks become increasingly similar. The adaptive random walk can be used as an efficient message-passing algorithm on this class of graphs for large degree of rewiring.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Bruce T.
2015-12-11
Problem: The overall goal of this proposal is to detect observed seasonal-mean precipitation variations and extreme event occurrences over the United States. Detection, e.g. the process of demonstrating that an observed change in climate is unusual, first requires some means of estimating the range of internal variability absent any external drivers. Ideally, the internal variability would be derived from the observations themselves, however generally the observed variability is a confluence of both internal variability and variability in response to external drivers. Further, numerical climate models—the standard tool for detection studies—have their own estimates of intrinsic variability, which may differ substantially from that found in the observed system as well as other model systems. These problems are further compounded for weather and climate extremes, which as singular events are particularly ill-suited for detection studies because of their infrequent occurrence, limited spatial range, and underestimation within global and even regional numerical models. Rationale: As a basis for this research we will show how stochastic daily-precipitation models—models in which the simulated interannual-to-multidecadal precipitation variance is purely the result of the random evolution of daily precipitation events within a given time period—can be used to address many of these issues simultaneously. Through the novel application of these well-established models, we can first estimate the changes/trends in various means and extremes that can occur even with fixed daily-precipitation characteristics, e.g. that can occur simply as a result of the stochastic evolution of daily weather events within a given climate. Detection of a change in the observed climate—either naturally or anthropogenically forced—can then be defined as any change relative to this stochastic variability, e.g. as changes/trends in the means and extremes that could only have occurred through a change in the underlying climate. As such, this method is capable of detecting “hot spot” regions—as well as “flare ups” within the hot spot regions—that have experienced interannual to multi-decadal scale variations and trends in seasonal-mean precipitation and extreme events. Further by applying the same methods to numerical climate models we can discern the fidelity of the current-generation climate models in representing detectability within the observed climate system. In this way, we can objectively determine the utility of these model systems for performing detection studies of historical and future climate change.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the proposed approach can be used to perform dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
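For orientation, the classical spectral-representation formula that the paper starts from can be written in a few lines; the sketch below simulates a scalar stationary process from an assumed one-sided power spectral density (it illustrates the OSR formula only, not the paper's two-random-variable reduction).

```python
import numpy as np

def simulate_spectral(S, w_max, n_freq, t, rng):
    """Classical spectral representation of a zero-mean stationary Gaussian process:
    X(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k), with phi_k ~ U(0, 2*pi)."""
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_freq)
    amp = np.sqrt(2.0 * S(w) * dw)
    return (amp[None, :] * np.cos(np.outer(t, w) + phi[None, :])).sum(axis=1)

# Assumed one-sided PSD of a turbulence-like excitation (illustrative, not the paper's model).
S = lambda w: 1.0 / (1.0 + w**2)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 60.0, 2000)
x = simulate_spectral(S, w_max=20.0, n_freq=512, t=t, rng=rng)

# The time-averaged variance of a long record should be close to the integral of the
# one-sided PSD, which for this S(w) over (0, 20) equals arctan(20).
print(x.var(), np.arctan(20.0))
```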
The living Drake equation of the Tau Zero Foundation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-03-01
The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
A Unifying Probability Example.
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.
2002-01-01
Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
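The lognormal limit described above is easy to reproduce numerically; the sketch below (with invented means and spreads for the seven factors, not Maccone's numbers) multiplies seven independent, uniformly distributed factors and checks that the product behaves like a lognormal random variable.

```python
import numpy as np

rng = np.random.default_rng(2025)
n = 200_000

# Seven independent positive factors, each uniform around an assumed mean value
# with an assumed half-width (purely illustrative inputs).
means = np.array([3.5e11, 0.5, 0.3, 0.2, 0.2, 0.1, 1e-7])
halfw = 0.5 * means
factors = rng.uniform(means - halfw, means + halfw, size=(n, 7))

N = factors.prod(axis=1)  # number of communicating civilizations, one value per draw
logN = np.log(N)

# By the CLT, log N is approximately normal, i.e. N is approximately lognormal.
print("mean of N        :", N.mean())
print("skewness of log N:", ((logN - logN.mean())**3).mean() / logN.std()**3)  # close to 0
```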
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr2 apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
Correlated resistive/capacitive state variability in solid TiO2 based memory devices
NASA Astrophysics Data System (ADS)
Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis
2017-05-01
In this work, we experimentally demonstrated the correlated resistive/capacitive switching and state variability in practical TiO2 based memory devices. Based on the filamentary switching mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model that takes into account the random distribution of defects and the coexistence of memristive and memcapacitive behavior.
Algebraic Functions of H-Functions with Specific Dependency Structure.
1984-05-01
The distribution of algebraic functions of H-function variates is studied through their characteristic functions; such analysis is reproduced in books by Springer, Anderson, Feller, and Mood and Graybill. The following linearity property for expectations of jointly distributed random variables is derived (Theorem 1.1): if X and Y are real random variables... Related references appear in the American Journal of Mathematical and Management Science, including Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables."
On the fluctuations of sums of independent random variables.
Feller, W
1969-07-01
If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)² = E(S(n)²). The occasional maxima of the ratios S(n)/s(n) are surprisingly large and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
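For the normal case mentioned at the end of the abstract, the classical law of the iterated logarithm corresponds to the boundary a(n) = sqrt(2 n log log n); the small simulation below (illustrative only, not part of the paper) looks at the fluctuations of S(n) relative to that boundary along one long path.

```python
import numpy as np

rng = np.random.default_rng(13)
n_max = 200_000

# One long random walk with i.i.d. standard normal steps; S[k] holds the partial sum S_{k+1}.
steps = rng.normal(size=n_max)
S = np.cumsum(steps)

n = np.arange(3, n_max + 1)                   # start at n = 3 so log(log(n)) is defined
boundary = np.sqrt(2.0 * n * np.log(np.log(n)))
ratio = S[n - 1] / boundary

print("max of S_n / sqrt(2 n log log n) over this path:", ratio.max())
# The law of the iterated logarithm says the limsup of this ratio is 1 almost surely,
# so excursions approach (but rarely much exceed) 1 on long paths.
```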
NASA Astrophysics Data System (ADS)
Maccone, C.
In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
Bacterial Presence in Layered Rock Varnish-Possible Mars Analog?
NASA Astrophysics Data System (ADS)
Krinsley, D.; Rusk, B. G.
2000-08-01
Rock varnish from locations in Death Valley, California; Peru; Antarctica; and Hawaii reveals nanometer-scale layering (less than 1 nm to about 75 nm) when studied with transmission electron microscopy (TEM). Parallel layers of clay minerals containing evidence of presumed bacteria were present in all samples. Samples range in age from a few thousand years to perhaps a million years. Diagenesis is relatively limited, as chemical composition is variable, both from top to bottom and along layers in these varnish samples. Also, occasional exotic minerals occur randomly in most varnish sections, and vary in size and hardness, again suggesting relative lack of diagenetic alteration. Additional information can be found in the original extended abstract.
Association between family variables and Mexican American children's dietary behaviors.
Ayala, Guadalupe X; Baquero, Barbara; Arredondo, Elva M; Campbell, Nadia; Larios, Sandra; Elder, John P
2007-01-01
To examine the association between family variables and children's diets. Cross-sectional study with households sampled using random-digit dialing. Children completed a one-time, self-administered survey, and mothers participated in a face-to-face structured interview. Data collection occurred in southern San Diego County on the U.S.-Mexico border. One hundred sixty-seven Mexican American children between 8 and 18 years of age and their mothers. Diet: number of snacks, candies and sweets, and sodas consumed daily; dietary fat and fiber; and money spent weekly on fast food and snacks. Family variables: household size, family support for healthful eating, number of meals eaten together, availability of fast food in the home, food ads seen on television, and parent purchasing food products that children saw advertised on television. Regression analyses were used to examine the independent contributions of family variables on dietary intake. Statistical significance was set at P < .05. Greater family support for healthful eating was associated with fewer snacks and more fiber consumed. Children of parents who purchased food products that their children had seen advertised on television reported consuming more snacks and more fat, and they spent more money on fast food and snacks. Family-based interventions are needed to moderate the potential influence of television-advertised food products on children's requests for these food products.
Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory
NASA Astrophysics Data System (ADS)
Heittmann, Arne; Noll, Tobias G.
2013-07-01
A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS, which are based on the electrochemical metallization effect (ECM), this variability is - to some extent - caused by discrete events such as electrochemical reactions, which occur on atomic scale and are at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations on a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
Sharma, Namrata; Goel, Manik; Bansal, Shubha; Agarwal, Prakashchand; Titiyal, Jeewan S; Upadhyaya, Ashish D; Vajpayee, Rasik B
2013-06-01
To compare the equivalence of moxifloxacin 0.5% with a combination of fortified cefazolin sodium 5% and tobramycin sulfate 1.3% eye drops in the treatment of moderate bacterial corneal ulcers. Randomized, controlled, equivalence clinical trial. Microbiologically proven cases of bacterial corneal ulcers were enrolled in the study and were allocated randomly to 1 of the 2 treatment groups. Group A was given combination therapy (fortified cefazolin sodium 5% and tobramycin sulfate) and group B was given monotherapy (moxifloxacin 0.5%). The primary outcome variable for the study was percentage of the ulcers healed at 3 months. The secondary outcome variables were best-corrected visual acuity and resolution of infiltrates. Of a total of 224 patients with bacterial keratitis, 114 patients were randomized to group A, whereas 110 patients were randomized to group B. The mean ± standard deviation ulcer size in groups A and B were 4.2 ± 2 and 4.41 ± 1.5 mm, respectively. The prevalence of coagulase-negative Staphylococcus (40.9% in group A and 48.2% in group B) was similar in both the study groups. A complete resolution of keratitis and healing of ulcers occurred in 90 patients (81.8%) in group A and 88 patients (81.4%) in group B at 3 months. The observed difference in the percentage of ulcers healed at 3 months was less than the equivalence margin of 20%. Worsening of ulcer was seen in 18.2% cases in group A and in 18.5% cases in group B. Mean time to epithelialization was similar, and there was no significant difference in the 2 groups (P = 0.065). No serious events attributable to therapy were reported. Corneal healing using 0.5% moxifloxacin monotherapy is equivalent to that of combination therapy using fortified cefazolin and tobramycin in the treatment of moderate bacterial corneal ulcers. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
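An empirical check of such results is straightforward by simulation; the sketch below (not the authors' Meijer-G/Fox-H computation) estimates the CDF of a product of N standard Gaussian variables, together with its absolute value and square, by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(8)
N, n_samples = 3, 1_000_000

# Product of N independent standard Gaussian random variables, one product per row.
prod = rng.normal(size=(n_samples, N)).prod(axis=1)

def ecdf(sample, x):
    """Empirical cumulative distribution function evaluated at the points x."""
    return np.searchsorted(np.sort(sample), x, side="right") / sample.size

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("CDF of product   :", ecdf(prod, x))
print("CDF of |product| :", ecdf(np.abs(prod), x))
print("CDF of product^2 :", ecdf(prod**2, x))
# By symmetry, the CDF of the product at 0 is 0.5; the absolute value and square are 0 below 0.
```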
Austin, Peter C
2014-03-30
Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
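A hedged sketch of the IPTW workflow described above, on simulated data rather than the statin example from the tutorial: fit a propensity model by logistic regression, form stabilized inverse-probability-of-treatment weights, and compute weighted Kaplan-Meier curves for the treated and untreated groups.

```python
# IPTW with a weighted Kaplan-Meier estimator on simulated time-to-event data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=(n, 2))                                   # baseline confounders
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_treat)                                   # treatment assignment
hazard = 0.05 * np.exp(0.4 * x[:, 0] + 0.3 * x[:, 1] - 0.5 * z)
t_event = rng.exponential(1 / hazard)
t_cens = rng.uniform(0, 30, n)
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]     # propensity score
w = np.where(z == 1, z.mean() / ps, (1 - z.mean()) / (1 - ps))  # stabilized weights

def weighted_km(time, event, w, grid):
    """Weighted Kaplan-Meier survival estimate evaluated on a time grid."""
    order = np.argsort(time)
    time, event, w = time[order], event[order], w[order]
    at_risk = np.cumsum(w[::-1])[::-1]                 # weighted risk set at each time
    surv = np.cumprod(1 - np.where(event == 1, w / at_risk, 0.0))
    return np.interp(grid, time, surv, left=1.0)

grid = np.linspace(0, 20, 5)
for arm in (1, 0):
    m = z == arm
    print("treated" if arm else "control",
          np.round(weighted_km(time[m], event[m], w[m], grid), 3))
```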
Random parameter models for accident prediction on two-lane undivided highways in India.
Dinu, R R; Veeraragavan, A
2011-02-01
Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in the literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways exhibits considerable internal variability, ranging from differences in vehicle type to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of this variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, and driveway density and horizontal and vertical curvatures are randomly distributed across locations. The paper concludes with a discussion on modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.
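For readers who want a concrete starting point, the following sketch fits the fixed-parameter negative binomial baseline that the abstract contrasts with random-parameter models. The data and variable names (aadt, driveway_density, curvature) are made up, and the random-parameter extension (simulated maximum likelihood or Bayesian estimation) is not shown.

```python
# Fixed-parameter negative binomial accident model on simulated segment data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "aadt": rng.uniform(2_000, 20_000, n),          # traffic volume (veh/day)
    "driveway_density": rng.uniform(0, 30, n),      # driveways per km
    "curvature": rng.uniform(0, 5, n),              # combined curvature index
})
mu = np.exp(-6 + 0.0002 * df.aadt + 0.03 * df.driveway_density + 0.1 * df.curvature)
df["crashes"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

X = sm.add_constant(df[["aadt", "driveway_density", "curvature"]])
nb = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb.summary())
```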
Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!
Vetter, Thomas R; Mascha, Edward J
2017-09-01
Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a factor associated with both the exposure of interest and the outcome of interest; it correlates (positively or negatively) with both the exposure and the outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.
Random dopant fluctuations and statistical variability in n-channel junctionless FETs
NASA Astrophysics Data System (ADS)
Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.
2018-01-01
The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of the n-channel silicon junctionless nanowire transistor (JNT) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10¹⁹, 6 × 10¹⁹ and 1 × 10²⁰ cm⁻³ have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near-ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.
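A back-of-the-envelope sketch of why random dopant fluctuations matter at these dimensions: for a uniformly doped body the dopant count is approximately Poisson, so its relative fluctuation scales as 1/sqrt(N). The nanowire cross-section below is an assumed value; the NEGF simulations of the paper go far beyond this estimate.

```python
# Expected dopant count and its Poisson relative fluctuation for a doped nanowire body.
import numpy as np

cross_section_nm2 = 5 * 5                 # assumed 5 nm x 5 nm silicon nanowire body
for gate_length_nm in (5, 10, 15):
    volume_cm3 = cross_section_nm2 * gate_length_nm * 1e-21   # nm^3 -> cm^3
    for doping_cm3 in (2e19, 6e19, 1e20):
        n_dopants = doping_cm3 * volume_cm3
        rel_fluct = 1 / np.sqrt(n_dopants)
        print(f"L={gate_length_nm:>2} nm, N_D={doping_cm3:.0e} cm^-3: "
              f"<N>={n_dopants:5.1f}, relative fluctuation ~ {rel_fluct:.0%}")
```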
2015-04-15
The random variable of interest is viewed in concert with a related random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved situation faced…
Quantifying the abundance of co-occurring conifers along Inland Northwest (USA) climate gradients.
Rehfeldt, Gerald E; Ferguson, Dennis E; Crookston, Nicholas L
2008-08-01
The occurrence and abundance of conifers along climate gradients in the Inland Northwest (USA) was assessed using data from 5082 field plots, 81% of which were forested. Analyses using the Random Forests classification tree revealed that the sequential distribution of species along an altitudinal gradient could be predicted with reasonable accuracy from a single climate variable, a growing-season dryness index, calculated from the ratio of degree-days >5 degrees C that accumulate in the frost-free season to the summer precipitation. While the appearance and departure of species in an ascending altitudinal sequence were closely related to the dryness index, the departure was most easily visualized in relation to negative degree-days (degree-days < 0 degrees C). The results were in close agreement with the works of descriptive ecologists. A Weibull response function was used to predict from climate variables the abundance and occurrence probabilities of each species, using binned data. The fit of the models was excellent, generally accounting for >90% of the variance among 100 classes.
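A hedged sketch of the modelling approach described above, using synthetic plot data in place of the Inland Northwest field plots: a Random Forest classifier predicting species presence from a growing-season dryness index and negative degree-days, followed by variable importances. The Weibull abundance step is not reproduced.

```python
# Random Forest occurrence model on synthetic "plot" data with two climate predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 5_000
dryness_index = rng.lognormal(mean=1.0, sigma=0.6, size=n)   # dd5 / summer precip
neg_degree_days = rng.uniform(0, 1_200, n)
X = np.column_stack([dryness_index, neg_degree_days])

# toy "species present" rule: a band along the dryness gradient, plus label noise
present = ((dryness_index > 1.5) & (dryness_index < 4.0) &
           (neg_degree_days < 800)).astype(int)
present ^= rng.binomial(1, 0.05, n)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, present, cv=5).mean().round(3))
rf.fit(X, present)
print("variable importances [dryness, neg. degree-days]:",
      rf.feature_importances_.round(3))
```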
Safety activities in small businesses
Sinclair, Raymond C.; Cunningham, Thomas R.
2015-01-01
Background Workplace injuries occur at higher rates in smaller firms than in larger firms, and the number of workplace safety activities appears to be inversely associated with those rates. Predictors of safety activities are rarely studied. Methods This study uses data from a national random survey of firms (n = 722) with fewer than 250 employees conducted in 2002. Results We found that, regardless of firm size or industry, safety activities were more common in 2002 than they were in a similar 1983 study. Having had an OSHA inspection in the last five years and firm size were stronger predictors of safety activities than industry hazardousness and managers' perceptions of hazardousness. All four variables were significant predictors (β range .19 to .28; R2 = .27). Conclusions Further progress in the prevention of injuries in small firms will require attention to factors likely subsumed within the firm size variable, especially the relative lack of slack resources that might be devoted to safety activities. PMID:26339124
Sums and Products of Jointly Distributed Random Variables: A Simplified Approach
ERIC Educational Resources Information Center
Stein, Sheldon H.
2005-01-01
Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
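For reference, the three standard results the abstract most likely refers to are stated below; the article's simplified proofs are not reproduced.

```latex
% Expected values and variances of sums and products of random variables
\begin{align*}
  \mathbb{E}[X+Y] &= \mathbb{E}[X] + \mathbb{E}[Y], \\
  \operatorname{Var}(X+Y) &= \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y), \\
  \mathbb{E}[XY] &= \mathbb{E}[X]\,\mathbb{E}[Y] \quad \text{when } X \text{ and } Y \text{ are independent.}
\end{align*}
```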
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients
ERIC Educational Resources Information Center
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako
2012-01-01
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Tankersley, Claude D.
1994-05-01
Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
Random Effects: Variance Is the Spice of Life.
Jupiter, Daniel C
Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
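A minimal sketch of the distinction described above, on simulated data with illustrative column names: diabetes status enters as a fixed effect, while hospital (a random sample of all hospitals) enters as a random intercept.

```python
# Linear mixed model: fixed effect for diabetes, random intercept for hospital.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_hosp, n_per = 20, 50
hospital = np.repeat(np.arange(n_hosp), n_per)
hosp_effect = rng.normal(0, 2.0, n_hosp)[hospital]        # random hospital intercepts
diabetes = rng.binomial(1, 0.3, n_hosp * n_per)
outcome = 10 + 3.0 * diabetes + hosp_effect + rng.normal(0, 4.0, n_hosp * n_per)

df = pd.DataFrame({"outcome": outcome, "diabetes": diabetes, "hospital": hospital})
fit = smf.mixedlm("outcome ~ diabetes", df, groups=df["hospital"]).fit()
print(fit.summary())   # fixed effect for diabetes, variance component for hospital
```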
McDonell, Michael G.; Leickly, Emily; McPherson, Sterling; Skalisky, Jordan; Srebnik, Debra; Angelo, Frank; Vilardaga, Roger; Nepom, Jenny R.; Roll, John M.; Ries, Richard K.
2017-01-01
Objective To determine if a contingency management intervention using the ethyl glucuronide (EtG) alcohol biomarker resulted in increased alcohol abstinence in outpatients with co-occurring serious mental illnesses. Secondary objectives were to determine if contingency management was associated with changes in heavy drinking, treatment attendance, drug use, cigarette smoking, psychiatric symptoms, and HIV-risk behavior. Method Seventy-nine (37% female, 44% non-white) outpatients with serious mental illness and alcohol dependence receiving treatment as usual completed a 4-week observation period and were randomized to 12-weeks of contingency management for EtG-negative urine samples and addiction treatment attendance, or reinforcement only for study participation. Contingency management included the variable magnitude of reinforcement “prize draw” procedure contingent on EtG-negative samples (<150 ng/mL) three times a week and weekly gift cards for outpatient treatment attendance. Urine EtG, drug test, and self-report outcomes were assessed during the 12-week intervention and 3-month follow-up periods. Results Contingency management participants were 3.1 times (95% CI: 2.2, 4.5) more likely to submit an EtG-negative urine test during the 12-week intervention period, attaining nearly 1.5 weeks of additional abstinence relative to controls. Contingency management participants had significantly lower mean EtG levels, reported less drinking and fewer heavy drinking episodes, and were more likely to submit stimulant-negative urine and smoking-negative breath samples, relative to controls. Differences in self-reported alcohol use were maintained at the 3-month follow-up. Conclusions This is the first randomized trial utilizing an accurate and validated biomarker (EtG) to demonstrate the efficacy of contingency management for alcohol dependence in outpatients with serious mental illness. PMID:28135843
McDonell, Michael G; Leickly, Emily; McPherson, Sterling; Skalisky, Jordan; Srebnik, Debra; Angelo, Frank; Vilardaga, Roger; Nepom, Jenny R; Roll, John M; Ries, Richard K
2017-04-01
The authors examined whether a contingency management intervention using the ethyl glucuronide (EtG) alcohol biomarker resulted in increased alcohol abstinence in outpatients with co-occurring serious mental illnesses. Secondary objectives were to determine whether contingency management was associated with changes in heavy drinking, treatment attendance, drug use, cigarette smoking, psychiatric symptoms, and HIV-risk behavior. Seventy-nine (37% female, 44% nonwhite) outpatients with serious mental illness and alcohol dependence receiving treatment as usual completed a 4-week observation period and were randomly assigned to 12 weeks of contingency management for EtG-negative urine samples and addiction treatment attendance, or reinforcement only for study participation. Contingency management included the variable magnitude of reinforcement "prize draw" procedure contingent on EtG-negative samples (<150 ng/mL) three times a week and weekly gift cards for outpatient treatment attendance. Urine EtG, drug test, and self-report outcomes were assessed during the 12-week intervention and 3-month follow-up periods. Contingency management participants were 3.1 times (95% CI=2.2-4.5) more likely to submit an EtG-negative urine test during the 12-week intervention period, attaining nearly 1.5 weeks of additional alcohol abstinence compared with controls. Contingency management participants had significantly lower mean EtG levels, reported less drinking and fewer heavy drinking episodes, and were more likely to submit stimulant-negative urine and smoking-negative breath samples, compared with controls. Differences in self-reported alcohol use were maintained at the 3-month follow-up. This is the first randomized trial utilizing an accurate and validated biomarker (EtG) to demonstrate the efficacy of contingency management for alcohol dependence in outpatients with serious mental illness.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.
2015-10-01
The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusly-reflecting boundaries with linear anisotropic scattering. Random variable transformation (RVT) technique is used to get the complete average for the solution functions, that are represented by the probability-density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation to the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). Then the transport equation is solved deterministically to get a closed form for the solution as a function of x and L. So, the solution is used to obtain the PDF of the solution functions applying the RVT technique among the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get the complete analytical averages for some interesting physical quantities, namely, reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with internal source of radiation are obtained and represented graphically.
Image discrimination models predict detection in fixed but not random noise
NASA Technical Reports Server (NTRS)
Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)
1997-01-01
By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.
CSI 2264: CHARACTERIZING YOUNG STARS IN NGC 2264 WITH STOCHASTICALLY VARYING LIGHT CURVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauffer, John; Rebull, Luisa; Carey, Sean
2016-03-15
We provide CoRoT and Spitzer light curves and other supporting data for 17 classical T Tauri stars in NGC 2264 whose CoRoT light curves exemplify the “stochastic” light curve class as defined in 2014 by Cody et al. The most probable physical mechanism to explain the optical variability within this light curve class is time-dependent mass accretion onto the stellar photosphere, producing transient hot spots. Where we have appropriate spectral data, we show that the veiling variability in these stars is consistent in both amplitude and timescale with the optical light curve morphology. The veiling variability is also well-correlated with the strength of the He i 6678 Å emission line, predicted by models to arise in accretion shocks on or near the stellar photosphere. Stars with accretion burst light curve morphology also have variable mass accretion. The stochastic and accretion burst light curves can both be explained by a simple model of randomly occurring flux bursts, with the stochastic light curve class having a higher frequency of lower amplitude events. Members of the stochastic light curve class have only moderate mass accretion rates. Their Hα profiles usually have blueshifted absorption features, probably originating in a disk wind. The lack of periodic signatures in the light curves suggests that little of the variability is due to long-lived hot spots rotating into or out of our line of sight; instead, the primary driver of the observed photometric variability is likely to be instabilities in the inner disk that lead to variable mass accretion.
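A toy version of the "randomly occurring flux bursts" picture invoked above: bursts arrive as a Poisson process and decay exponentially, with the stochastic class modelled as frequent low-amplitude events and the burster class as rare large ones. All rates and amplitudes are assumed, illustrative numbers.

```python
# Simulated light curves built from randomly occurring, exponentially decaying bursts.
import numpy as np

def burst_light_curve(rate_per_day, mean_amp, days=40.0, dt=0.01, decay_days=0.5,
                      rng=np.random.default_rng(6)):
    t = np.arange(0.0, days, dt)
    flux = np.ones_like(t)                       # quiescent photospheric level
    n_bursts = rng.poisson(rate_per_day * days)
    for t0 in rng.uniform(0.0, days, n_bursts):
        amp = rng.exponential(mean_amp)
        flux += np.where(t >= t0, amp * np.exp(-(t - t0) / decay_days), 0.0)
    return t, flux

for label, rate, amp in [("stochastic-like", 4.0, 0.05), ("burster-like", 0.3, 0.8)]:
    t, flux = burst_light_curve(rate, amp)
    print(f"{label:15s}: rms/mean = {flux.std() / flux.mean():.3f}")
```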
A bivariate model for analyzing recurrent multi-type automobile failures
NASA Astrophysics Data System (ADS)
Sunethra, A. A.; Sooriyarachchi, M. R.
2017-09-01
The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures where failures can occur due to various multi-type failure modes and these failures are repetitive such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and type of the failure serve as response variables. However, these two response variables are highly correlated with each other since the timing of failures has an association with the mode of the failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent data because failure instances of a single automobile are correlated with each other while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models for joining the responses and for representing the correlated data respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was implemented in the SAS procedure PROC NLMIXED by programming the appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and the bivariate model was found to perform better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.
Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-01-01
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914
Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-03-15
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
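A sketch of the comparison described in the two abstracts above, using scikit-learn's chained-equations imputer rather than the R MICE software from the paper: a linear (parametric) estimator versus a random forest estimator, on simulated data with a nonlinear dependence so the forest has something to exploit.

```python
# Chained-equations imputation with a parametric versus a random forest estimator.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(7)
n = 2_000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
x3 = np.sin(2 * x1) + 0.5 * x2**2 + rng.normal(0, 0.3, n)   # nonlinear in x1, x2
X_full = np.column_stack([x1, x2, x3])

X_miss = X_full.copy()
miss = rng.random(n) < 0.3                                   # 30% of x3 missing at random
X_miss[miss, 2] = np.nan

for name, est in [("parametric (BayesianRidge)", BayesianRidge()),
                  ("random forest", RandomForestRegressor(n_estimators=100, random_state=0))]:
    imputed = IterativeImputer(estimator=est, random_state=0).fit_transform(X_miss)
    rmse = np.sqrt(np.mean((imputed[miss, 2] - X_full[miss, 2]) ** 2))
    print(f"{name}: imputation RMSE = {rmse:.3f}")
```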
Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa
2008-03-01
Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
The Stochastic Modelling of Endemic Diseases
NASA Astrophysics Data System (ADS)
Susvitasari, Kurnia; Siswantining, Titin
2017-01-01
Epidemics have been studied for a long time, but genuine progress was hardly forthcoming until the end of the 19th century (Bailey, 1975). Both deterministic and stochastic models have been used to describe them. Then, from 1927 to 1939, Kermack and McKendrick introduced a generalization of these models, incorporating variables such as the rates of infection and recovery. The purpose of this project is to investigate the behaviour of the models when we set the basic reproduction number, R0. This quantity is defined as the expected number of contacts made by a typical infective with susceptibles in the population. According to epidemic threshold theory, when R0 ≤ 1, a minor epidemic occurs with probability one in both approaches, but when R0 > 1, the deterministic and stochastic models have different interpretations. In the deterministic approach, a major epidemic occurs with probability one when R0 > 1, and the model predicts that the disease will settle down to an endemic equilibrium. Stochastic models, on the other hand, allow that a minor epidemic may still occur; if it does, the epidemic dies out quickly. Moreover, if the population size is large and a major epidemic occurs, it will take off, reach the endemic level, and move randomly around the deterministic equilibrium.
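A sketch of the threshold behaviour summarized above: a stochastic SIR epidemic simulated through its embedded jump chain, started from a single infective, can still die out early even when R0 > 1, with extinction probability close to 1/R0. Population size and rates are assumed values.

```python
# Stochastic SIR final size via the embedded jump chain (Gillespie-style competition
# between infection and recovery events); compares the fraction of minor epidemics
# with the branching-process extinction probability 1/R0.
import numpy as np

def sir_final_size(n_pop=1_000, r0=2.0, gamma=1.0, i0=1, rng=None):
    rng = rng or np.random.default_rng()
    beta = r0 * gamma
    s, i, total_infected = n_pop - i0, i0, i0
    while i > 0:
        rate_inf = beta * s * i / n_pop
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):   # next event is an infection
            s, i, total_infected = s - 1, i + 1, total_infected + 1
        else:                                                 # next event is a recovery
            i -= 1
    return total_infected

rng = np.random.default_rng(8)
sizes = np.array([sir_final_size(rng=rng) for _ in range(500)])
print("P(minor epidemic, <5% ever infected) ~", np.mean(sizes < 0.05 * 1_000))
print("branching-process extinction probability 1/R0 =", 1 / 2.0)
```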
Slagter, Kirsten W; Meijer, Henny J A; Bakker, Nicolaas A; Vissink, Arjan; Raghoebar, Gerry M
2016-06-01
This study aims to assess, with regard to marginal bone level (MBL), whether the outcome of immediate implant placement in bony defects in the esthetic zone was non-inferior to delayed implant placement after 1 year. Forty patients with a failing tooth in the esthetic zone and a labial bony defect of ≥5 mm after removal of a tooth were randomly assigned for immediate (n = 20) or delayed (n = 20) implant placement. Second-stage surgery and provisionalization occurred after 3 months of healing. Follow-up was at 1 month and 1 year after definitive crown placement. The study was powered to detect a difference in MBL of >0.9 mm. Buccal bone thickness, soft tissue peri-implant parameters, esthetic indices, and patient satisfaction were also assessed. One year after definitive crown placement, MBL loss was 0.56 ± 0.39 mm mesially and 0.74 ± 0.51 mm distally for the immediate placement group and 0.51 ± 0.43 mesially and 0.54 ± 0.45 distally mm for the delayed placement group, respectively (not significant). Regarding differences in means, non-inferiority was observed after 1 year (difference in mean for immediate versus delayed: mesially 0.04 mm [95% confidence interval (CI) = -0.22 to 0.30 mm, P = 0.40]; distally 0.21 mm [95% CI = -0.10 to 0.51 mm, P = 0.58]). No significant differences in the other outcome variables were observed. Immediate implant placement with delayed provisionalization was non-inferior to delayed implant placement with delayed provisionalization in labial bony defects of ≥5 mm regarding change in MBL. Although not powered for other outcome variables, no clinically relevant differences were observed in these variables.
Randomized double-blind comparison of cognitive and EEG effects of lacosamide and carbamazepine.
Meador, Kimford J; Loring, David W; Boyd, Alan; Echauz, Javier; LaRoche, Suzette; Velez-Ruiz, Naymee; Korb, Pearce; Byrnes, William; Dilley, Deanne; Borghs, Simon; De Backer, Marc; Story, Tyler; Dedeken, Peter; Webster, Elizabeth
2016-09-01
Differential effectiveness of antiepileptic drugs (AEDs) is more commonly determined by tolerability than efficacy. Cognitive effects of AEDs can adversely affect tolerability and quality of life. This study evaluated cognitive and EEG effects of lacosamide (LCM) compared with carbamazepine immediate-release (CBZ-IR). A randomized, double-blind, double-dummy, two-period crossover, fixed-dose study in healthy subjects compared neuropsychological and EEG effects of LCM (150mg, b.i.d.) and CBZ-IR (200mg, t.i.d.). Testing was conducted at screening, predrug baseline, the end of each treatment period (3-week titration; 3-week maintenance), and the end of each washout period (4weeks after treatment). A composite Z-score was derived for the primary outcome variable (computerized cognitive tests and traditional neuropsychological measures) and separately for the EEG measures. Other variables included individual computer, neuropsychological, and EEG scores and adverse events (AEs). Subjects included 60 healthy adults (57% female; mean age: 34.4years [SD: 10.5]); 44 completed both treatments; 41 were per protocol subjects. Carbamazepine immediate-release had worse scores compared with LCM for the primary composite neuropsychological outcome (mean difference=0.33 [SD: 1.36], p=0.011) and for the composite EEG score (mean difference=0.92 [SD: 1.77], p=0.003). Secondary analyses across the individual variables revealed that CBZ-IR was statistically worse than LCM on 36% (4/11) of the neuropsychological tests (computerized and noncomputerized) and 0% of the four EEG measures; none favored CBZ-IR. Drug-related AEs occurred more with CBZ-IR (49%) than LCM (22%). Lacosamide had fewer untoward neuropsychological and EEG effects and fewer AEs and AE-related discontinuations than CBZ-IR in healthy subjects. Lacosamide exhibits a favorable cognitive profile. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale
2014-12-01
Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research has explored an approach to estimate the nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data and CCD images from China's HJ-1 satellite and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI), and the HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of TIN estimation was significantly improved when the SAR and optical data were used in combination rather than either data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.
Wrong-way driving crashes: A random-parameters ordered probit analysis of injury severity.
Jalayer, Mohammad; Shabanpour, Ramin; Pour-Rouholamin, Mahdi; Golshani, Nima; Zhou, Huaguo
2018-04-23
In the context of traffic safety, whenever a motorized road user moves against the proper flow of vehicle movement on physically divided highways or access ramps, this is referred to as wrong-way driving (WWD). WWD is notorious for its severity rather than frequency. Based on data from the U.S. National Highway Traffic Safety Administration, an average of 355 deaths occur in the U.S. each year due to WWD. This total translates to 1.34 fatalities per fatal WWD crash, whereas the same rate for other crash types is 1.10. Given these sobering statistics, WWD crashes, and specifically their severity, must be meticulously analyzed using the appropriate tools to develop sound and effective countermeasures. The objectives of this study were to use a random-parameters ordered probit model to determine the features that best describe WWD crashes and to evaluate the severity of injuries in WWD crashes. This approach takes into account unobserved effects that may be associated with roadway, environmental, vehicle, crash, and driver characteristics. To that end, and given the rarity of WWD events, 15 years of crash data from the states of Alabama and Illinois were obtained and compiled. Based on these data, a series of contributing factors including responsible driver characteristics, temporal variables, vehicle characteristics, and crash variables are determined, and their impacts on the severity of injuries are explored. An elasticity analysis was also performed to accurately quantify the effect of significant variables on injury severity outcomes. According to the obtained results, factors such as driver age, driver condition, roadway surface conditions, and lighting conditions significantly contribute to the injury severity of WWD crashes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Straka, Peter
2018-08-01
Continuous Time Random Walk models (CTRW) of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs e.g. in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the 1-1 correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length^2/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
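A hedged Monte Carlo sketch of the CTRW picture described above: Gaussian spatial jumps, and waiting times drawn from a one-sided stable law whose index β(x) depends on the walker's current position. The β(x) profile and all scales are assumed; the point is only that paths of this kind can be simulated to approximate VOFFPE solutions, as the abstract notes.

```python
# CTRW with spatially varying stable waiting times (illustrative scales only).
import numpy as np
from scipy.stats import levy_stable

def beta_of_x(x):
    # assumed smooth anomalous-exponent profile between roughly 0.5 and 0.9
    return 0.7 + 0.2 * np.tanh(x)

def ctrw_position(t_final=10.0, dt_scale=0.05, jump_scale=0.1,
                  rng=np.random.default_rng(9)):
    x, t = 0.0, 0.0
    while t < t_final:
        b = beta_of_x(x)
        # totally skewed (one-sided) beta-stable waiting time at the current position
        wait = dt_scale * levy_stable.rvs(b, 1.0, random_state=rng)
        t += abs(wait)
        x += jump_scale * rng.standard_normal()
    return x

samples = np.array([ctrw_position() for _ in range(200)])
print("sample mean / std of X(t_final):",
      samples.mean().round(3), samples.std().round(3))
```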
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
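A minimal sketch of three of the scatterplot tests listed above, applied to one input/output pair from a Monte Carlo sample: a linear test (Pearson), a monotone test (Spearman), and a trend-in-central-tendency test (Kruskal-Wallis across bins of the input). The two-phase flow model itself is not reproduced; the response below is a made-up monotone, nonlinear function.

```python
# Scatterplot pattern tests on one sampled input variable and a model output.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kruskal

rng = np.random.default_rng(10)
n = 300                                    # one Latin-hypercube-sized sample
x = rng.uniform(0, 1, n)                   # sampled input variable
y = np.exp(3 * x) + rng.normal(0, 2, n)    # assumed monotone, nonlinear response

r, p = pearsonr(x, y)                      # (1) linear relationship
print(f"Pearson  r = {r:.3f}, p = {p:.3g}")
rho, p = spearmanr(x, y)                   # (2) monotonic relationship
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")

# (3) trend in central tendency: compare y across quintile bins of x
edges = np.quantile(x, [0, 0.2, 0.4, 0.6, 0.8, 1.0])
groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.3f}, p = {p:.3g}")
```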
Using digital photogrammetry to conduct an anthropometric analysis of wheelchair users.
Barros, Helda Oliveira; Soares, Marcelomárcio
2012-01-01
This study deals with using digital photogrammetry to make an anthropometric analysis of wheelchair users. To analyse the data, Digita software was used, made available through an agreement between the Design Department of the Federal University of Pernambuco (Brazil) and the Department of Ergonomics of the Technical University of Lisbon (Portugal). Data collection involved a random sample of 18 subjects and occurred in the Biomechanics Laboratory of the Maurice of Nassau Faculty, located in Recife, Pernambuco. The methodology applied comprises the steps of Ergonomic Assessment, Configuration of the Data Base, Taking Digital Photographs, Digitalising the Coordinates and Presentation of Results. Fifteen structural variables related to static anthropometry and four functional-range variables related to dynamic anthropometry were analysed. The results are presented by analysing personal data, classified by gender, ethnicity and age; by functional analysis of the sample, classified by clinical diagnosis, results of assessing the joints, results of the evaluation through motion and postural evaluation; and by analysis of the anthropometric sample, which indicated for each variable the number of people, the mean, the standard deviation, and the minimum, median and maximum values.
Random vectors and spatial analysis by geostatistics for geotechnical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, D.S.
1987-08-01
Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitude of difference vectors. Many random variables in geotechnology are vectorial rather than scalar, and their structural analysis requires interpolation of sample variables to construct and characterize structural models. A better local estimator results in higher-quality input models; geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to the spatial analysis of random vectors in geoscience as well as various geotechnical fields including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block cavings, petroleum engineering, and hydrologic and hydraulic modeling.
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
Frusca, Tiziana; Todros, Tullia; Lees, Christoph; Bilardo, Caterina M
2018-02-01
Early-onset fetal growth restriction represents a particular dilemma in clinical management, balancing the risk of iatrogenic prematurity against waiting for the fetus to gain more maturity while being exposed to the risk of intrauterine death or the sequelae of acidosis. The Trial of Umbilical and Fetal Flow in Europe was a European, multicenter, randomized trial that aimed to determine which criteria should trigger delivery in early fetal growth restriction. We present the key findings of the primary and secondary analyses. Women with fetal abdominal circumference <10th percentile and umbilical pulsatility index >95th percentile between 26-32 weeks were randomized to 1 of 3 monitoring and delivery protocols. These were: fetal heart rate variability based on computerized cardiotocography; and early or late ductus venosus Doppler changes. A safety net based on fetal heart rate abnormalities or umbilical Doppler changes mandated delivery irrespective of randomized group. The primary outcome was normal neurodevelopmental outcome at 2 years. Among 511 women randomized, 362/503 (72%) had associated hypertensive conditions. In all, 463/503 (92%) of fetuses survived and cerebral palsy occurred in 6/443 (1%) with known outcome. Among all women there was no difference in outcome based on randomized group; however, of survivors, significantly more fetuses randomized to the late ductus venosus group had a normal outcome (133/144; 95%) than those randomized to computerized cardiotocography alone (111/131; 85%). In 118/310 (38%) of babies delivered <32 weeks, the indication was safety-net criteria: 55/106 (52%) in late ductus venosus, 37/99 (37%) in early ductus venosus, and 26/105 (25%) in computerized cardiotocography groups. Higher middle cerebral artery impedance adjusted for gestation was associated with neonatal survival without severe morbidity (odds ratio, 1.24; 95% confidence interval, 1.02-1.52) and infant survival without neurodevelopmental impairment at 2 years (odds ratio, 1.33; 95% confidence interval, 1.03-1.72) although birthweight and gestational age were more important determinants. Perinatal and 2-year outcome was better than expected in all randomized groups. Among survivors, 2-year neurodevelopmental outcome was best in those randomized to delivery based on late ductus venosus changes. Given a high rate of delivery based on the safety-net criteria, deciding delivery based on late ductus venosus changes and abnormal computerized fetal heart rate variability seems prudent. There is no rationale for delivery based on cerebral Doppler changes alone. Of note, most women with early-onset fetal growth restriction develop hypertension. Copyright © 2018 Elsevier Inc. All rights reserved.
Schneider, Kristin L; Bodenlos, Jamie S; Ma, Yunsheng; Olendzki, Barbara; Oleski, Jessica; Merriam, Philip; Crawford, Sybil; Ockene, Ira S; Pagoto, Sherry L
2008-01-01
Background Obesity is often comorbid with depression and individuals with this comorbidity fare worse in behavioral weight loss treatment. Treating depression directly prior to behavioral weight loss treatment might bolster weight loss outcomes in this population, but this has not yet been tested in a randomized clinical trial. Methods and design This randomized clinical trial will examine whether behavior therapy for depression administered prior to standard weight loss treatment produces greater weight loss than standard weight loss treatment alone. Obese women with major depressive disorder (N = 174) will be recruited from primary care clinics and the community and randomly assigned to one of the two treatment conditions. Treatment will last 2 years, and will include a 6-month intensive treatment phase followed by an 18-month maintenance phase. Follow-up assessment will occur at 6-months and 1- and 2 years following randomization. The primary outcome is weight loss. The study was designed to provide 90% power for detecting a weight change difference between conditions of 3.1 kg (standard deviation of 5.5 kg) at 1-year assuming a 25% rate of loss to follow-up. Secondary outcomes include depression, physical activity, dietary intake, psychosocial variables and cardiovascular risk factors. Potential mediators (e.g., adherence, depression, physical activity and caloric intake) of the intervention effect on weight change will also be examined. Discussion Treating depression before administering intensive health behavior interventions could potentially boost the impact on both mental and physical health outcomes. Trial registration NCT00572520 PMID:18793398
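A rough reader's check of the sample-size reasoning stated above, using statsmodels: a 3.1 kg difference with a 5.5 kg standard deviation at 90% power, inflated for 25% loss to follow-up. This is not the authors' original calculation.

```python
# Two-sample t-test power calculation with attrition inflation.
from statsmodels.stats.power import TTestIndPower

effect_size = 3.1 / 5.5                      # Cohen's d for the stated difference and SD
n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                        power=0.90, alpha=0.05)
n_total = 2 * n_per_arm / 0.75               # inflate for 25% loss to follow-up
print(f"n per arm (complete cases): {n_per_arm:.0f}, total to randomize: {n_total:.0f}")
# roughly 67 per arm before attrition, i.e. about 178 total, close to the reported N = 174
```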
Povsic, Thomas J.; Vavalle, John P.; Aberle, Laura H.; Kasprzak, Jaroslaw D.; Cohen, Mauricio G.; Mehran, Roxana; Bode, Christoph; Buller, Christopher E.; Montalescot, Gilles; Cornel, Jan H.; Rynkiewicz, Andrzej; Ring, Michael E.; Zeymer, Uwe; Natarajan, Madhu; Delarche, Nicolas; Zelenkofske, Steven L.; Becker, Richard C.; Alexander, John H.
2013-01-01
Aims We sought to determine the degree of anticoagulation reversal required to mitigate bleeding, and assess the feasibility of using pegnivacogin to prevent ischaemic events in acute coronary syndrome (ACS) patients managed with an early invasive approach. REG1 consists of pegnivacogin, an RNA aptamer selective factor IXa inhibitor, and its complementary controlling agent, anivamersen. REG1 has not been studied in invasively managed patients with ACS nor has an optimal level of reversal allowing safe sheath removal been defined. Methods and results Non-ST-elevation ACS patients (n = 640) with planned early cardiac catheterization via femoral access were randomized 2:1:1:2:2 to pegnivacogin with 25, 50, 75, or 100% anivamersen reversal or heparin. The primary endpoint was total ACUITY bleeding through 30 days. Secondary endpoints included major bleeding and the composite of death, myocardial infarction, urgent target vessel revascularization, or recurrent ischaemia. Enrolment in the 25% reversal arm was suspended after 41 patients. Enrolment was stopped after three patients experienced allergic-like reactions. Bleeding occurred in 65, 34, 35, 30, and 31% of REG1 patients with 25, 50, 75, and 100% reversal and heparin. Major bleeding occurred in 20, 11, 8, 7, and 10% of patients. Ischaemic events occurred in 3.0 and 5.7% of REG1 and heparin patients, respectively. Conclusion At least 50% reversal is required to allow safe sheath removal after cardiac catheterization. REG1 appears a safe strategy to anticoagulate ACS patients managed invasively and warrants further investigation in adequately powered clinical trials of patients who require short-term high-intensity anticoagulation. Clinical Trials Registration: ClinicalTrials.gov NCT00932100. PMID:22859796
Bano, Luca; Drigo, Ilenia; Tonon, Elena; Berto, Giacomo; Tavella, Alexander; Woudstra, Cedric; Capello, Katia; Agnoletti, Fabrizio
2015-12-01
Bovine botulism is a sporadic acute disease that usually causes catastrophic losses in affected herds. The unusual clinical evolution of a persistent mild outbreak in a dairy herd prompted us to characterize the neurotoxin gene profile of the strain involved and to evaluate whether seroconversion had occurred. Diagnosis was based on mild classical symptoms and was supported by PCR and bacteriological findings, which revealed the involvement of a non-mosaic type C strain. An in-house ELISA was developed to detect antibodies to botulinum neurotoxin type C and its performance was evaluated in a vaccination study. Fifty days after the index case, fecal and serum samples were collected from the 14 animals of the herd and screened for Clostridium botulinum and anti-botulinum neurotoxin type C antibodies, respectively. The in-house ELISA was also used to test 100 serum samples randomly collected from 20 herds. Strong ELISA reactions were observed in 3 convalescent and 5 asymptomatic animals involved in the studied outbreak. The ELISA-positive cows all tested positive for non-mosaic C. botulinum type C in the feces and the same strain was also detected in the alfalfa hay, suspected to be the carrier source. Ten out of the 100 randomly collected sera tested positive for anti-botulinum neurotoxin type C antibodies: 7 had borderline values and 3 from the same herd showed titers three times higher than the cut-off. We concluded that type C botulism in cattle may occur with variable severity and that prolonged exposure to sublethal doses of botulinum neurotoxin C may occur, resulting in detectable antibodies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kumar, Sandhya B; Hamilton, Barbara C; Wood, Stephanie G; Rogers, Stanley J; Carter, Jonathan T; Lin, Matthew Y
2018-03-01
Laparoscopic sleeve gastrectomy (LSG) has become popular due to its technical ease and excellent short-term results. Understanding the risk profile of LSG compared with the gold standard laparoscopic Roux-en-Y gastric bypass (LRYGB) is critical for patient selection. To use traditional regression techniques and random forest classification algorithms to compare LSG with LRYGB using the 2015 Metabolic and Bariatric Surgery Accreditation and Quality Improvement Data Registry. United States. Outcomes were leak, morbidity, and mortality within 30 days. Variable importance was assessed using random forest algorithms. Multivariate models were created in a training set and evaluated on the testing set with receiver operating characteristic curves. The adjusted odds of each outcome were compared. Of 134,142 patients, 93,062 (69%) underwent LSG and 41,080 (31%) underwent LRYGB. A total of 178 deaths occurred: 96 (0.1%) in LSG patients compared with 82 (0.2%) in LRYGB patients (P<.001). Morbidity occurred in 8% (5.8% in LSG versus 11.7% in LRYGB, P<.001). Leaks occurred in 1% (0.8% in LSG versus 1.6% in LRYGB, P<.001). The most important predictors of all outcomes were body mass index, albumin, and age. In the adjusted multivariate models, LRYGB had higher odds of all complications (leak: odds ratio 2.10, P<.001; morbidity: odds ratio 2.02, P<.001; death: odds ratio 1.64, P<.01). In the Metabolic and Bariatric Surgery Accreditation and Quality Improvement data registry for 2015, LSG had half the risk-adjusted odds of death, serious morbidity, and leak in the first 30 days compared with LRYGB. Copyright © 2018 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.
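The two-step approach described in this abstract (random forest for variable importance, then an adjusted model compared on a held-out set) can be sketched as follows. This is a hypothetical illustration on synthetic data, not the MBSAQIP analysis; the predictors, coefficients, and event rates are invented.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(45, 8, n),      # body mass index (assumed distribution)
    rng.normal(4.0, 0.5, n),   # albumin (assumed)
    rng.normal(45, 12, n),     # age (assumed)
    rng.integers(0, 2, n),     # procedure: 0 = LSG, 1 = LRYGB (assumed coding)
])
logit = -6 + 0.04 * X[:, 0] - 0.5 * X[:, 1] + 0.02 * X[:, 2] + 0.7 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # synthetic leak outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("variable importance (BMI, albumin, age, procedure):", rf.feature_importances_)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("adjusted odds ratio for LRYGB vs LSG:", np.exp(lr.coef_[0][3]))
print("test-set AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))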
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
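A minimal numerical sketch of the two capture-timing models contrasted above: a simple Poisson renewal process (one capture per event) versus a compound Poisson process in which each feeding bout yields several captures. Rates and bout sizes are invented; clustering is summarized with the index of dispersion of daily counts, which exceeds 1 for the clustered model.

import numpy as np

rng = np.random.default_rng(1)
days, hours_per_day = 200, 24.0

def poisson_renewal(rate_per_hour):
    # one capture per event
    return rng.poisson(rate_per_hour * hours_per_day, size=days)

def compound_poisson(bout_rate_per_hour, mean_captures_per_bout):
    # bouts arrive as a Poisson process; each bout yields a geometric number of captures
    bouts = rng.poisson(bout_rate_per_hour * hours_per_day, size=days)
    return np.array([rng.geometric(1.0 / mean_captures_per_bout, size=b).sum() if b else 0
                     for b in bouts])

simple = poisson_renewal(0.05)
clustered = compound_poisson(0.02, 2.5)

for name, counts in [("Poisson renewal", simple), ("compound Poisson", clustered)]:
    print(name, "index of dispersion:", counts.var() / counts.mean())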
Blitstein, Jonathan L; Evans, W Douglas; Davis, Kevin C; Kamyab, Kian
2012-01-01
To examine changes in parent-child communication related to sexual behavior after exposure to public health messages. Randomized, controlled trial that was part of precampaign message testing. Exposure occurred online or through DVDs mailed to participants and viewed on their personal computers. Data collection occurred via a secure Web site. Participants included parents (n = 1969) living with a child age 10 to 14 years drawn from a nationally representative sample of U.S. households. Treatment participants were exposed to video, audio, and print advertisements that promoted the benefits of speaking to their children early and often about delaying initiation of sexual activity; messages also directed parents to an informational Web site. The dependent variable assessed frequency of parent-child communication related to sexual behavior. The primary independent variable was treatment assignment. Longitudinal growth modeling that included five waves of data. The trajectory of growth over time differed between fathers in the treatment group and fathers in the control group (F[1, 2357] = 4.15; p < .042), indicating more frequent communication among treatment fathers than among control fathers. Trajectories did not differ between mothers in treatment and control groups. This study demonstrates that father-child and mother-child communication patterns differ over time in response to public health messages. Findings have implications for researchers developing health marketing campaigns.
Reliability analysis of structures under periodic proof tests in service
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.
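A rough Monte Carlo sketch of the kind of model outlined above: random time to crack initiation, strength that degrades after initiation, random annual peak service loads, and periodic proof tests that replace structures failing the proof load. All distributions, rates, and the proof schedule are illustrative assumptions, not the paper's values.

import numpy as np

rng = np.random.default_rng(2)
n_struct, years = 20000, 20
proof_interval, proof_load = 4, 1.2            # proof test every 4 years at load 1.2 (assumed)

t_init = rng.weibull(2.0, n_struct) * 15.0     # time to fatigue crack initiation
s0 = rng.normal(2.0, 0.2, n_struct)            # initial ultimate strength
failed_in_service = np.zeros(n_struct, dtype=bool)

for year in range(1, years + 1):
    degradation = np.clip((year - t_init) * 0.05, 0.0, None)   # strength loss after initiation
    strength = s0 - degradation
    load = rng.gumbel(0.8, 0.1, n_struct)                       # annual peak service load
    failed_in_service |= (~failed_in_service) & (load > strength)
    if year % proof_interval == 0:
        # structures failing the proof test are rebuilt and proof-tested anew
        replace = (~failed_in_service) & (strength < proof_load)
        s0[replace] = rng.normal(2.0, 0.2, replace.sum())
        t_init[replace] = year + rng.weibull(2.0, replace.sum()) * 15.0

print("probability of failure in service:", failed_in_service.mean())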
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
Evidence for attractors in English intonation.
Braun, Bettina; Kochanski, Greg; Grabe, Esther; Rosner, Burton S
2006-06-01
Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
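A hedged sketch of two standard facts related to the abstract, not the paper's specific gamma-based construction: (1) two independent gamma variables with a common scale combine, by a simple change of variables, into a beta variable; and (2) for 1 < q < 3 a q-Gaussian coincides, up to a scale factor, with a Student-t distribution with nu = (3 - q)/(q - 1) degrees of freedom, which gives an easy way to sample it.

import numpy as np

rng = np.random.default_rng(3)
n = 100000

# (1) gamma -> beta change of variables
a, b, theta = 2.0, 3.0, 1.5
x1 = rng.gamma(a, theta, n)
x2 = rng.gamma(b, theta, n)
beta_sample = x1 / (x1 + x2)                 # distributed as Beta(a, b)
print("beta sample mean (theory %.3f):" % (a / (a + b)), beta_sample.mean())

# (2) q-Gaussian sample via its Student-t representation
q = 1.2
nu = (3.0 - q) / (q - 1.0)                   # nu = 9 here
q_gauss = rng.standard_t(nu, n)              # q-Gaussian up to a scale factor
kurtosis = np.mean(q_gauss**4) / np.mean(q_gauss**2)**2
print("sample kurtosis (theory 4.2 for nu = 9):", kurtosis)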
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
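A minimal sketch, under assumed distributions, of the uncertainty-propagation step described above: given limited data on the input variables, draw realizations of the population mean (a multivariate-t draw) and covariance (a Wishart-type draw), and propagate each realization through an output function to see how much the output mean varies because the input data are limited. The response function and data are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.multivariate_normal([10.0, 2.0], [[4.0, 0.6], [0.6, 1.0]], size=15)  # limited data
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

def output(x):                               # illustrative response function
    return x[:, 0] ** 2 + 3.0 * x[:, 1]

out_means = []
for _ in range(500):
    # covariance realization (Wishart-based) and mean realization (multivariate t)
    cov = stats.wishart.rvs(df=n - 1, scale=S / (n - 1), random_state=rng)
    t_scale = np.sqrt(rng.chisquare(n - d) / (n - d))
    mu = xbar + rng.multivariate_normal(np.zeros(d), S / n) / t_scale
    x = rng.multivariate_normal(mu, cov, size=2000)
    out_means.append(output(x).mean())

print("variance of the output mean due to limited input data:", np.var(out_means))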
Meteorological variables to aid forecasting deep slab avalanches on persistent weak layers
Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.
2015-01-01
Deep slab avalanches are particularly challenging to forecast. These avalanches are difficult to trigger, yet when they release they tend to propagate far and can result in large and destructive avalanches. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl ski area in southwest Montana to test the usefulness of meteorological variables for predicting seasons and days with deep slab avalanches. We defined deep slab avalanches as those that failed on persistent weak layers deeper than 0.9 m, and that occurred after February 1st. Previous studies often used meteorological variables from days prior to avalanches, but we also considered meteorological variables over the early months of the season. We used classification trees and random forests for our analyses. Our results showed seasons with either dry or wet deep slabs on persistent weak layers typically had less precipitation from November through January than seasons without deep slabs on persistent weak layers. Days with deep slab avalanches on persistent weak layers often had warmer minimum 24-hour air temperatures, and more precipitation over the prior seven days, than days without deep slabs on persistent weak layers. Days with deep wet slab avalanches on persistent weak layers were typically preceded by three days of above freezing air temperatures. Seasonal and daily meteorological variables were found useful to aid forecasting dry and wet deep slab avalanches on persistent weak layers, and should be used in combination with continuous observation of the snowpack and avalanche activity.
Land change variability and human-environment dynamics in the United States Great Plains
Drummond, M.A.; Auch, Roger F.; Karstensen, K.A.; Sayler, K. L.; Taylor, Janis L.; Loveland, Thomas R.
2012-01-01
Land use and land cover changes have complex linkages to climate variability and change, biophysical resources, and socioeconomic driving forces. To assess these land change dynamics and their causes in the Great Plains, we compare and contrast contemporary changes across 16 ecoregions using Landsat satellite data and statistical analysis. Large-area change analysis of agricultural regions is often hampered by change detection error and the tendency for land conversions to occur at the local-scale. To facilitate a regional-scale analysis, a statistical sampling design of randomly selected 10 km × 10 km blocks is used to efficiently identify the types and rates of land conversions for four time intervals between 1973 and 2000, stratified by relatively homogenous ecoregions. Nearly 8% of the overall Great Plains region underwent land-use and land-cover change during the study period, with a substantial amount of ecoregion variability that ranged from less than 2% to greater than 13%. Agricultural land cover declined by more than 2% overall, with variability contingent on the differential characteristics of regional human–environment systems. A large part of the Great Plains is in relatively stable land cover. However, other land systems with significant biophysical and climate limitations for agriculture have high rates of land change when pushed by economic, policy, technology, or climate forcing factors. The results indicate the regionally based potential for land cover to persist or fluctuate as land uses are adapted to spatially and temporally variable forcing factors.
Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad
2015-01-01
Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques were used. We describe average observed conditions for a subset of predictor variables as well as proposing statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark from which future data will be compared.
Momentary effects of exposure to prosmoking media on college students' future smoking risk.
Shadel, William G; Martino, Steven C; Setodji, Claude; Scharf, Deborah
2012-07-01
This study used ecological momentary assessment to examine acute changes in college students' future smoking risk as a function of their exposure to prosmoking media (e.g., smoking in movies, paid advertising, point-of-sale displays). A sample of 135 college students ("ever" and "never" smokers) carried handheld computers for 21 days, recording their exposures to all forms of prosmoking media during the assessment period. They also responded to three investigator-initiated control prompts during each day of the assessment period (i.e., programmed to occur randomly). After each prosmoking media exposure and after each random control prompt they answered questions that measured their risk of future smoking. Responses between prosmoking media encounters were compared (within subjects) to responses made during random control prompts. Compliance with the study protocol was high, with participants responding to over 83% of all random prompts. Participants recorded nearly three encounters with prosmoking media each week. Results of linear mixed modeling indicated that all participants had higher future smoking risk following exposure to prosmoking media compared with control prompts (p < .05); this pattern of response did not differ between ever and never smokers (p = .769). Additional modeling of the variances around participants' risk of future smoking revealed that the response of never smokers to prosmoking media was significantly more variable than the response of ever smokers. Exposure to prosmoking media is associated with acute changes in future smoking risk, and never smokers and ever smokers respond differently to these exposures.
Non-manipulation quantitative designs.
Rumrill, Phillip D
2004-01-01
The article describes non-manipulation quantitative designs of two types, correlational and causal comparative studies. Both of these designs are characterized by the absence of random assignment of research participants to conditions or groups and non-manipulation of the independent variable. Without random selection or manipulation of the independent variable, no attempt is made to draw causal inferences regarding relationships between independent and dependent variables. Nonetheless, non-manipulation studies play an important role in rehabilitation research, as described in this article. Examples from the contemporary rehabilitation literature are included. Copyright 2004 IOS Press
Fletcher, H. M.; Dawkins, J.; Rattray, C.; Wharfe, G.; Reid, M.; Gordon-Strachan, G.
2013-01-01
Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students of 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo. PMID:23431314
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One widely known and applied clustering method is K-Means clustering. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research compares the results of K-Means clustering with initial centroids determined randomly and with a KD-Tree method. On a data set of 1,000 student academic records used to classify potential dropouts, random initial centroid selection gives an SSE of 952,972 for the quality variable and 232.48 for the GPA variable, whereas KD-Tree initial centroid selection gives an SSE of 504,302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
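A toy comparison in the spirit of the abstract: K-Means SSE (inertia) with purely random initial centroids versus a smarter seeding. Here k-means++ stands in for the KD-Tree-based seeding used in the paper, and the data are synthetic blobs, not the student academic data set.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=5, cluster_std=2.0, random_state=0)

for init in ("random", "k-means++"):
    # n_init=1 so the effect of the seeding itself is visible
    km = KMeans(n_clusters=5, init=init, n_init=1, random_state=0).fit(X)
    print(f"init={init:10s}  SSE={km.inertia_:.1f}")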
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
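A small numerical illustration of the bi-random idea described above: a normally distributed parameter whose mean is itself normally distributed, used inside a chance constraint of the kind the ECCP model handles. All numbers and units are invented for illustration.

import numpy as np

rng = np.random.default_rng(5)
n = 200000

mean_of_mean, sd_of_mean = 100.0, 8.0            # the mean itself follows a normal law
sd_inner = 5.0

mu = rng.normal(mean_of_mean, sd_of_mean, n)     # outer layer: random mean
emission_cap = rng.normal(mu, sd_inner)          # inner layer: bi-random CO2 cap (assumed units)

allocation = 110.0                               # a candidate CO2-allocation level
prob_satisfied = np.mean(emission_cap >= allocation)
print("chance constraint P(cap >= allocation) ~", prob_satisfied)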
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
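A quick numerical check of the claim above, with an arbitrary choice of positive right-skewed summands: compare how well a fitted log-normal and a fitted Gaussian describe the distribution of the sum, using Kolmogorov-Smirnov distances. The summand distribution and sample sizes are illustrative only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_terms, n_sums = 50, 20000

# positive, right-skewed summands (illustrative choice)
summands = rng.lognormal(mean=0.0, sigma=1.0, size=(n_sums, n_terms))
s = summands.sum(axis=1)

shape, loc, scale = stats.lognorm.fit(s, floc=0)
ks_lognorm = stats.kstest(s, "lognorm", args=(shape, loc, scale)).statistic
ks_normal = stats.kstest(s, "norm", args=(s.mean(), s.std())).statistic

print("KS distance to fitted log-normal:", ks_lognorm)
print("KS distance to fitted Gaussian:  ", ks_normal)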
Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.
Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena
2016-07-01
To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.
Sailer, Irena; Balmer, Marc; Hüsler, Jürg; Hämmerle, Christoph Hans Franz; Känel, Sarah; Thoma, Daniel Stefan
The aim of this study was to test whether posterior zirconia-ceramic (ZC) and metal-ceramic (MC) fixed dental prostheses (FDPs) exhibit similar survival and technical/biologic complication rates. A total of 58 patients in need of 76 posterior FDPs were randomly assigned to receive 40 ZC and 36 MC FDPs. The restorations were examined at baseline (cementation) and yearly for 5 years. Technical and biologic outcomes were compared. The independent treatment groups were compared with nonparametric Mann-Whitney test for metric variables and with Fisher exact test for categoric data. A total of 52 patients with 40 ZC and 29 MC FDPs were examined at 5 years. No FDP failed during the 5 years; 2 ZC FDPs failed at 65.4 and 73.3 months. Debonding occurred at 3 ZC FDPs. Technical outcomes (modified US Public Health Service criteria) and general periodontal parameters did not show significant differences between ZC and MC FDPs. ZC FDPs exhibited similar outcomes to MC FDPs based on 5-year survival estimates. The majority of technical and biologic outcome measures were not significantly different.
Adams, Josh; Takekawa, John Y.; Carter, Harry R.; Yee, Julie L.
2010-01-01
We used radiotelemetry to evaluate at-sea habitat use by Cassin's Auklets (Ptychoramphus aleuticus) that bred at Prince Island, off southern California, from 1999 through 2001. We used logistic regression to compare paired radiotelemetry (presence) with random (pseudo-absence) location-associated habitat variables derived from (1) satellite remote-sensing of sea surface temperature and chlorophyll-a concentration and (2) bathymetry. Compared with random locations within their foraging area and after controlling for distance to colony, odds ratios indicated that Cassin's Auklets with dependent young occurred in relatively shallower, warmer, and chlorophyll-rich water associated with chlorophyll fronts near the insular shelf break. These oceanographic features characterize habitats that support key euphausiid prey (e.g., Thysanoessa spinifera) and also other krill predators. Radiotelemetry combined with satellite remote-sensing of the ocean provides an alternative to vessel-based surveys for evaluating seabird foraging habitats. In the absence of information on the actual distribution, abundance, and, hence, availability of Zooplankton prey for seabirds, environmental factors can serve as proxies to help elucidate distributional patterns of seabirds at sea.
Sweazea, Karen L; Johnston, Carol S; Knurick, Jessica; Bliss, Courtney D
2017-03-04
Oxidative stress resulting from dietary, lifestyle and environmental factors is strongly associated with tissue damage and aging. It occurs when there is either an overproduction of reactive oxygen species (i.e., oxidants) or decreased bioavailability of antioxidants that can scavenge them. The objective of this 12-week double-blind placebo-controlled study was to assess the efficacy of a nutraceutical at augmenting antioxidant status. Healthy adults (25-45 y) were randomized to either a treatment group (Product B, n = 23) or a placebo group (control, n = 20). No significant effect of Product B was observed for anthropometric variables or markers of glucose and lipid regulation. Biomarkers of oxidative stress were likewise not altered following the 12-week intervention. Plasma catalase concentrations were significantly elevated following 12 weeks of Product B as compared to the control group (+6.1 vs. -10.3 nmol/min/mL, p = 0.038), whereas other measures of antioxidant capacity were not significantly different between the groups. Product B effectively augmented concentrations of the anti-aging antioxidant catalase in healthy adults.
Chevalier, Michael W.; El-Samad, Hana
2014-01-01
Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs; the time to failure of a specific experimental unit may be censored, whether right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for this data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median showed better results with respect to the estimation of the survival function.
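A hedged sketch of the imputation step described above for interval-censored times: each event time known only to lie in (L, R] is replaced by the midpoint, an endpoint, or a random draw inside the interval (the mean and median variants follow the same pattern), and a parametric model, here a Weibull as an illustrative stand-in, is then fitted to the imputed times. Inspection intervals and the true distribution are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 300
true_t = rng.weibull(1.5, n) * 10.0
left = np.floor(true_t)                  # inspection intervals of width 1 (assumed)
right = left + 1.0

imputations = {
    "midpoint": (left + right) / 2.0,
    "left":     left + 1e-6,             # avoid zeros for the Weibull fit
    "right":    right,
    "random":   rng.uniform(left, right),
}

for name, t_imp in imputations.items():
    c, loc, scale = stats.weibull_min.fit(t_imp, floc=0)
    print(f"{name:8s}  Weibull shape={c:.2f}  scale={scale:.2f}")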
Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions
NASA Astrophysics Data System (ADS)
Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia
2018-03-01
Results of various investigations show the relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, adhesion of the particle to the bubble surface, and detachment. These factors are characterized by randomness, so it is only possible to speak of the probability of occurrence of each of these events, which directly affects the rate of the process and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation, and average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion helps to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion via the size distribution of air bubbles in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods, Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.
[Effects of radiation emitted from mobile phones on short-term heart rate variability parameters].
Yıldız, Metin; Yılmaz, Derya; Güler, Inan; Akgüllü, Cağdaş
2012-08-01
In this study, the effects of radiation emitted from a mobile phone (MP) on heart rate variability (HRV), an accepted non-invasive indicator of autonomic nervous system (ANS) activity, were investigated while taking into account the deficiencies of previous studies. A randomized controlled study was designed with 30 young, healthy volunteers. During the experiment, which had three periods, electrocardiogram (ECG) and respiration signals were recorded, and the MP was attached to the subjects' right ear with a bone. Ten randomly selected subjects were exposed to high-level radiation during the second period (Experimental Group 1). Ten others were exposed to maximum-level radiation during the third period (Experimental Group 2). Ten recordings were also made while the MP was switched off, as a control. Short-term HRV parameters were obtained, and repeated-measures ANOVA and suitable post-hoc tests were applied to the results. According to the repeated-measures ANOVA, there were no significant main effects of group; however, there were some significant differences across measurement periods and in the group*period interactions. The post-hoc tests showed that the mean R-R interval and HF power were significantly changed by maximum radiation emitted from the MP. Due to the radiation emitted from MPs at maximum power, some changes may occur in HRV parameters that are associated with increased parasympathetic activity, but the magnitude of these changes is similar to that produced by daily activities such as excitement or standing up.
Are all data created equal?--Exploring some boundary conditions for a lazy intuitive statistician.
Lindskog, Marcus; Winman, Anders
2014-01-01
The study investigated potential effects of the presentation order of numeric information on retrospective subjective judgments of descriptive statistics of this information. The studies were theoretically motivated by the assumption in the naïve sampling model of independence between temporal encoding order of data in long-term memory and retrieval probability (i.e. as implied by a "random sampling" from memory metaphor). In Experiment 1, participants experienced Arabic numbers that varied in distribution shape/variability between the first and the second half of the information sequence. Results showed no effects of order on judgments of mean, variability or distribution shape. To strengthen the interpretation of these results, Experiment 2 used a repeated judgment procedure, with an initial judgment occurring prior to the change in distribution shape of the information half-way through data presentation. The results of Experiment 2 were in line with those from Experiment 1, and in addition showed that the act of making explicit judgments did not impair accuracy of later judgments, as would be suggested by an anchoring and insufficient adjustment strategy. Overall, the results indicated that participants were very responsive to the properties of the data while at the same time being more or less immune to order effects. The results were interpreted as being in line with the naïve sampling models in which values are stored as exemplars and sampled randomly from long-term memory.
NASA Astrophysics Data System (ADS)
Pérez-Ruzafa, Angel; Quispe, Jhoni I.; Umgiesser, Georg; Ghezzo, Michol; De Pascalis, Francesca; Marcos, Concepción
2014-05-01
Fish assemblages in coastal lagoons are constituted by species with different guilds and life histories, including estuarine residents but also a high percentage of marine stragglers and marine migrants. Previous studies showed that different ichthyoplankton assemblages can be identified inside a lagoon, depending on hydrological conditions, but at the same time high spatial and temporal variability has been observed. The models proposed to explain the configuration of lagoon assemblages, based on probabilities of colonization from the open sea, involve an important stochastic component and introduce some randomness that could lead to this high spatial and temporal variability at short- and long-term scales. In this work we analyze the relationship between ichthyoplankton assemblages in the Mar Menor lagoon and the adjacent open sea in the framework of the hydrodynamics of the lagoon and the connectivity between sampling stations, using hydrodynamic models. The results show a complex interaction between the different factors that leads to a highly variable system with high accumulated richness and diversity of species and a large proportion of occasional visitors and stragglers, suggesting that competitive lottery mechanisms can play an important role in the maintenance of coastal lagoon communities, where environmental variability occurs in a system with strong differences in colonization rates and connectivity, not only with the open sea but also between locations within the lagoon.
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
Schein, Aso; Correa, Aps; Casali, Karina Rabello; Schaan, Beatriz D
2016-01-20
Physical exercise reduces glucose levels and glucose variability in patients with type 2 diabetes. Acute inspiratory muscle exercise has been shown to reduce these parameters in a small group of patients with type 2 diabetes, but these results have yet to be confirmed in a well-designed study. The aim of this study is to investigate the effect of acute inspiratory muscle exercise on glucose levels, glucose variability, and cardiovascular autonomic function in patients with type 2 diabetes. This study will use a randomized clinical trial crossover design. A total of 14 subjects will be recruited and randomly allocated to two groups to perform acute inspiratory muscle loading at 2 % of maximal inspiratory pressure (PImax, placebo load) or 60 % of PImax (experimental load). Inspiratory muscle training could be a novel exercise modality to be used to decrease glucose levels and glucose variability. ClinicalTrials.gov NCT02292810 .
The Role of Lexical Stress on the Use of Vocal Fry in Young Adult Female Speakers.
Gibson, Todd A
2017-01-01
Vocal fry is a voice register often used by young adult women for sociolinguistic purposes. Some acoustic correlates of lexical stress, however, appear incompatible with the use of vocal fry. The objective of this study was to systematically examine the role of lexical stress in the use of vocal fry by young adult women. This is a semi-randomized controlled laboratory study. Fifty female undergraduate students were recorded repeating one-, two-, three-, and four-syllable nonwords that conformed to English phonotactics. Nonwords were presented in order from shorter to longer lengths, with stimuli randomized within syllable length. Perceptual analyses of recordings were augmented by acoustic analyses to identify each syllable in which vocal fry occurred. Eighty-six percent of participants produced at least one episode of vocal fry. Vocal fry was more likely to occur in unstressed than stressed position, and the likelihood increased as distance from the stressed syllable increased. There was considerable variability in the use of vocal fry. Frequent and infrequent users varied on the degree to which they used vocal fry in single-syllable nonwords. Vocal fry use persists among young adult women even in the absence of syntactic and pragmatic influences. Lexical stress appeared to dramatically reduce the use of vocal fry. Patterns of vocal fry use appeared to be different for frequent and infrequent users of this vocal register. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mishra, H.; Karmakar, S.; Kumar, R.
2016-12-01
Risk assessment does not remain simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) lack of knowledge of the input variables (mostly random) and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the activities of the Turbhe landfill (Navi Mumbai, India) over various time horizons. The LandSim-simulated concentrations of six heavy metals in groundwater have then been used in the health risk model. Water intake, exposure duration, exposure frequency, bioavailability, and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been treated as non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probabilities of the Hazard Index (HI), the summation of hazard quotients, for the heavy metals Co, Cu, Mn, Ni, Zn and Fe for the male and female populations have been quantified and found to be high (HI > 1) for all the time horizons considered, which indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
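A simplified sketch of the integrated probabilistic-fuzzy idea described above, for a single metal: the concentration and body weight are treated as random (log-normal / normal draws), while water intake is a triangular fuzzy number handled at a chosen alpha-cut, so each Monte Carlo draw yields an interval of hazard-quotient values. All numbers are invented, not the Turbhe landfill values.

import numpy as np

rng = np.random.default_rng(8)
n = 10000
alpha = 0.5                                   # alpha-cut level

conc = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n)   # mg/L (assumed)
body_weight = rng.normal(60.0, 8.0, n)                        # kg (assumed)
rfd = 0.02                                                    # reference dose, mg/kg-day (assumed)

# triangular fuzzy water intake (L/day): support [1.0, 3.0], peak 2.0 (assumed)
lo, peak, hi = 1.0, 2.0, 3.0
intake_low = lo + alpha * (peak - lo)          # alpha-cut lower bound
intake_high = hi - alpha * (hi - peak)         # alpha-cut upper bound

hq_low = conc * intake_low / (body_weight * rfd)
hq_high = conc * intake_high / (body_weight * rfd)

print("P(HQ > 1) at alpha=0.5, lower bound:", np.mean(hq_low > 1))
print("P(HQ > 1) at alpha=0.5, upper bound:", np.mean(hq_high > 1))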
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
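A rough sketch of the optimization idea described above, using Gaussian-process regression as a stand-in for kriging: sensors are removed one at a time, greedily keeping the subset whose map predictions at all original locations stay closest (lowest RMSE) to the full-network measurements. The spatial field, noise level, and kernel length scale are synthetic assumptions, not the facility data or the paper's exact algorithm.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)
locs = rng.uniform(0, 100, size=(80, 2))                 # 80 sensor locations
field = np.sin(locs[:, 0] / 15.0) + 0.5 * np.cos(locs[:, 1] / 20.0)
conc = field + rng.normal(0, 0.1, 80)                    # measured concentrations

def map_rmse(keep_idx):
    gp = GaussianProcessRegressor(kernel=RBF(20.0), alpha=0.01, normalize_y=True)
    gp.fit(locs[keep_idx], conc[keep_idx])
    return np.sqrt(np.mean((gp.predict(locs) - conc) ** 2))

keep = list(range(len(locs)))
removal_order = []
while len(keep) > 40:                                    # remove half the sensors
    scores = [(map_rmse([i for i in keep if i != j]), j) for j in keep]
    best_rmse, drop = min(scores)
    keep.remove(drop)
    removal_order.append((drop, round(best_rmse, 3)))

print("last 5 removals (sensor index, map RMSE):", removal_order[-5:])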
Highly variable sperm precedence in the stalk-eyed fly, Teleopsis dalmanni
Corley, Laura S; Cotton, Samuel; McConnell, Ellen; Chapman, Tracey; Fowler, Kevin; Pomiankowski, Andrew
2006-01-01
Background When females mate with different males, competition for fertilizations occurs after insemination. Such sperm competition is usually summarized at the level of the population or species by the parameter, P2, defined as the proportion of offspring sired by the second male in double mating trials. However, considerable variation in P2 may occur within populations, and such variation limits the utility of population-wide or species P2 estimates as descriptors of sperm usage. To fully understand the causes and consequences of sperm competition requires estimates of not only mean P2, but also intra-specific variation in P2. Here we investigate within-population quantitative variation in P2 using a controlled mating experiment and microsatellite profiling of progeny in the multiply mating stalk-eyed fly, Teleopsis dalmanni. Results We genotyped 381 offspring from 22 dam-sire pair families at four microsatellite loci. The mean population-wide P2 value of 0.40 was not significantly different from that expected under random sperm mixing (i.e. P2 = 0.5). However, patterns of paternity were highly variable between individual families; almost half of families displayed extreme second male biases resulting in zero or complete paternity, whereas only about one third of families had P2 values of 0.5, the remainder had significant, but moderate, paternity skew. Conclusion Our data suggest that all modes of ejaculate competition, from extreme sperm precedence to complete sperm mixing, occur in T. dalmanni. Thus the population mean P2 value does not reflect the high underlying variance in familial P2. We discuss some of the potential causes and consequences of post-copulatory sexual selection in this important model species. PMID:16800877
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis, as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of the resulting models. Variable selection methods have been widely investigated for fully observed data. In the presence of missing data, however, variable selection methods need to be carefully designed to account for the missing data mechanism and the statistical technique used to handle the missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
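A hedged sketch of the first strategy described above: create several imputed datasets (here by simple random hot-deck draws from the observed values, assuming MCAR), run a lasso on each, and keep variables selected in a majority of the imputed datasets. This is illustrative, not a recommended production imputer.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(10)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)
X_missing = X.copy()
X_missing[rng.random((n, p)) < 0.15] = np.nan            # 15% of values missing

def hot_deck_impute(X_na):
    # fill each missing entry with a random draw from the observed values of its column
    X_imp = X_na.copy()
    for j in range(X_na.shape[1]):
        miss = np.isnan(X_na[:, j])
        X_imp[miss, j] = rng.choice(X_na[~miss, j], size=miss.sum())
    return X_imp

m = 20
selected = np.zeros(p)
for _ in range(m):
    lasso = LassoCV(cv=5).fit(hot_deck_impute(X_missing), y)
    selected += (np.abs(lasso.coef_) > 1e-8)

print("selection frequency per variable:", selected / m)  # variables 0 and 3 should dominate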
Observational studies of patients in the emergency department: a comparison of 4 sampling methods.
Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R
2012-08-01
We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
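A small simulation in the spirit of the comparison above: draw samples from a synthetic "ED population" by true random sampling versus business-hours sampling, and count how often a chi-square test flags the sample as differing from the population on one categorical variable. The population structure and the hour-dependence of payer source are invented.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(11)
N = 21662
hour = rng.integers(0, 24, N)
# payer source depends on arrival hour (assumed), so time-restricted sampling is biased
payer = np.where(rng.random(N) < np.where((hour >= 8) & (hour < 17), 0.6, 0.3), 1, 0)

def reject_rate(sample_idx_fn, n_sample=400, trials=1000):
    pop_counts = np.bincount(payer, minlength=2)
    rejections = 0
    for _ in range(trials):
        s = payer[sample_idx_fn(n_sample)]
        table = np.vstack([np.bincount(s, minlength=2), pop_counts])
        chi2, p, dof, expected = chi2_contingency(table)
        rejections += (p < 0.05)
    return rejections / trials

true_random = lambda k: rng.choice(N, size=k, replace=False)
business_hours = lambda k: rng.choice(np.where((hour >= 8) & (hour < 17))[0], size=k, replace=False)

print("rejection rate, true random sampling  :", reject_rate(true_random))      # near 0.05
print("rejection rate, business-hours sampling:", reject_rate(business_hours))  # inflated by bias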
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo
2014-05-02
General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces the systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single-center, randomized controlled trial enrolling 50 patients scheduled for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, spatial distribution of ventilation assessed by means of electrical impedance tomography, and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces the systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery lasting longer than 3 hours. PROVAR is the first randomized controlled trial examining intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
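RANVAR itself is a BASIC program; as a rough modern counterpart, the sketch below draws the same seven variate families in Python with illustrative parameters, treating "Pascal" as the negative binomial (failures before the r-th success).

```python
# Drawing the seven variate families named above; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000

variates = {
    "uniform":     rng.uniform(0.0, 1.0, n_draws),
    "exponential": rng.exponential(scale=2.0, size=n_draws),
    "normal":      rng.normal(loc=0.0, scale=1.0, size=n_draws),
    "binomial":    rng.binomial(n=20, p=0.3, size=n_draws),
    "Poisson":     rng.poisson(lam=4.0, size=n_draws),
    "Pascal":      rng.negative_binomial(n=5, p=0.4, size=n_draws),
    "triangular":  rng.triangular(left=0.0, mode=2.0, right=5.0, size=n_draws),
}

for name, x in variates.items():
    print(f"{name:12s} mean={x.mean():7.3f}  var={x.var():7.3f}")
```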
ERIC Educational Resources Information Center
Pinsoneault, Terry B.
2007-01-01
The ability of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher et al., 2001) validity scales to detect random, partially random, and nonrandom MMPI-2 protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and F-sub(b) - F…
Treatment of Co-Occurring Substance Abuse and Suicidality among Adolescents: A Randomized Trial
ERIC Educational Resources Information Center
Esposito-Smythers, Christianne; Spirito, Anthony; Kahler, Christopher W.; Hunt, Jeffrey; Monti, Peter
2011-01-01
Objective: This study tested a cognitive-behavioral treatment protocol for adolescents with a co-occurring alcohol or other drug use disorder (AOD) and suicidality in a randomized clinical trial. Method: Forty adolescents (M[subscript age] = 15 years; 68% female, 89% White) and their families recruited from an inpatient psychiatric hospital were…
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Uncertainty in Random Forests: What does it mean in a spatial context?
NASA Astrophysics Data System (ADS)
Klump, Jens; Fouedjio, Francky
2017-04-01
Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well-understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different from the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model-driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a nonparametric method, may give better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
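One way to obtain the per-location uncertainty measure mentioned above is the spread of the individual tree predictions; the sketch below shows this on synthetic covariates. It is not the Kirkwood et al. dataset, and other uncertainty constructions for Random Forest exist.

```python
# Sketch: Random Forest regression whose "uncertainty" at each location is the
# spread of individual tree predictions. Data and covariates are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(0, 10, size=(n, 2))                              # e.g., coordinates/covariates
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.2, n)      # element-concentration proxy

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X, y)

X_new = rng.uniform(0, 10, size=(5, 2))
tree_preds = np.stack([t.predict(X_new) for t in rf.estimators_])  # (n_trees, n_points)
mean_pred = tree_preds.mean(axis=0)
spread = tree_preds.std(axis=0)                                     # per-location uncertainty proxy

for m, s in zip(mean_pred, spread):
    print(f"prediction = {m:6.3f}  +/- {s:5.3f}")
```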
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. But this increased bandwidth comes at the cost of the lower frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are very crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
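A bare-bones sketch of the slotting idea for randomly (Poisson) sampled data: lag products are accumulated into slots to estimate the autocorrelation, which is then cosine-transformed to a crude spectrum. The signal, slot width, and normalization are illustrative simplifications; the local-normalization and prefiltering refinements discussed above are omitted.

```python
# Minimal slotted-autocorrelation sketch for Poisson-sampled data, followed by
# a cosine transform to a crude spectrum. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

mean_rate = 200.0                                     # samples per second
T = 20.0
t = np.sort(rng.uniform(0, T, rng.poisson(mean_rate * T)))
u = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)  # stand-in LV signal
u -= u.mean()

def slotted_autocorrelation(t, u, dtau, n_slots):
    """Average the products u(t_i) u(t_j) over lag slots of width dtau."""
    num = np.zeros(n_slots)
    cnt = np.zeros(n_slots)
    for i in range(t.size):
        lags = t[i:] - t[i]
        valid = lags < n_slots * dtau
        k = (lags[valid] / dtau).astype(int)
        np.add.at(num, k, u[i] * u[i:][valid])
        np.add.at(cnt, k, 1)
    return num / np.maximum(cnt, 1)

dtau = 1.0 / (2 * mean_rate)                          # slot width ~ half the mean sampling interval
R = slotted_autocorrelation(t, u, dtau, n_slots=400)
taus = dtau * np.arange(R.size)
freqs = np.linspace(0, mean_rate / 2, 200)
spectrum = [2 * dtau * np.sum(R * np.cos(2 * np.pi * f * taus)) for f in freqs]
print("spectral peak near (Hz):", freqs[int(np.argmax(spectrum))])   # expect ~5 Hz
```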
Randomized clinical trial of extended use of a hydrophobic condenser humidifier: 1 vs. 7 days.
Thomachot, Laurent; Leone, Marc; Razzouk, Karim; Antonini, François; Vialet, Renaud; Martin, Claude
2002-01-01
To determine whether extended use (7 days) would affect the efficiency of heat and water preservation of a hydrophobic condenser humidifier, as well as the rate of ventilation-acquired pneumonia, compared with 1 day of use. Prospective, controlled, randomized, not blinded, clinical study. Twelve-bed intensive care unit of a university hospital. One hundred and fifty-five consecutive patients undergoing mechanical ventilation for ≥48 hrs. After randomization, patients were allocated to one of the two following groups: a) heat and moisture exchangers (HMEs) changed every 24 hrs; b) HMEs changed only once a week. Devices in both groups could be changed at the discretion of the staff when signs of occlusion or increased resistance were identified. Efficient airway humidification and heating were assessed by clinical variables (numbers of tracheal suctionings and instillations required, peak and mean airway pressures). The frequency rates of bronchial colonization and ventilation-acquired pneumonia were evaluated by using clinical and microbiological criteria. Endotracheal tube occlusion, ventilatory support variables, duration of mechanical ventilation, length of intensive care, acquired multiorgan dysfunction, and mortality rates also were recorded. The two groups were similar at the time of randomization. Endotracheal tube occlusion never occurred. In the targeted population (patients ventilated for ≥7 days), the frequency rate of ventilation-acquired pneumonia was 24% in the HME 1-day group and 17% in the HME 7-day group (p > .05, not significant). Ventilation-acquired pneumonia rates per 1000 ventilatory support days were 16.4/1000 in the HME 1-day group and 12.4/1000 in the HME 7-day group (p > .05, not significant). No statistically significant differences were found between the two groups for duration of mechanical ventilation, intensive care unit length of stay, acquired organ system derangements, and mortality rate. There was indirect evidence of very little, if any, change in HME resistance. Changing the studied hydrophobic HME after 7 days did not affect efficiency, increase resistance, or alter bacterial colonization. The frequency rate of ventilation-acquired pneumonia was also unchanged. Use of HMEs for >24 hrs and up to 7 days is safe.
1989-08-01
Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -A ln U. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, and normal variates are generated using a standard normal transformation together with the inverse transform method. (Appendix 3: distributions supported by the model.)
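The damaged excerpt above appears to describe inverse-transform generation of residual lifetimes for exponential and Weibull variables conditioned on survival to an age x. The sketch below is a hedged reconstruction under assumed notation (A for the exponential mean; beta, eta, gamma for the Weibull shape, scale, and location), not a verbatim recovery of the report's formulas.

```python
# Hedged reconstruction: sample the additional life s of an exponential or
# Weibull variable given survival to age x, by inverting the conditional
# survival function (inverse transform method). Symbols are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def conditional_exponential(A, size):
    """Residual life for an exponential with mean A (memoryless, so the
    residual life is again exponential with mean A)."""
    U = rng.uniform(size=size)
    return -A * np.log(U)

def conditional_weibull(beta, eta, gamma, x, size):
    """Residual life s given survival to age x for Weibull(shape=beta,
    scale=eta, location=gamma), inverting
    P(S > s | X > x) = exp(-((x+s-gamma)/eta)**beta + ((x-gamma)/eta)**beta)."""
    U = rng.uniform(size=size)
    base = ((x - gamma) / eta) ** beta
    return eta * (base - np.log(U)) ** (1.0 / beta) - (x - gamma)

print(conditional_exponential(A=2.0, size=3))
print(conditional_weibull(beta=1.5, eta=2.0, gamma=0.0, x=1.0, size=3))
```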
A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
1997-01-01
Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to insure safety) can result in extra material usage and hence, uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor unconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. The question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
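A small numerical illustration of the abstract's point, under an assumed skewed (lognormal) strength distribution: the mean-minus-three-sigma allowable can sit far from the small percentile it is meant to represent, and for these illustrative parameters it even falls below zero, i.e. it is uselessly conservative.

```python
# Three-sigma design value vs. the percentile it is meant to represent, for a
# skewed (lognormal) strength variable. Distribution parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
strength = rng.lognormal(mean=4.0, sigma=0.35, size=200_000)   # skewed "strength" data

mu, sd = strength.mean(), strength.std()
three_sigma_allowable = mu - 3 * sd

# Under normal theory, mean - 3*sigma corresponds to roughly the 0.135th percentile.
true_percentile_value = np.percentile(strength, 0.135)
fraction_below = (strength < three_sigma_allowable).mean()

print(f"mean - 3*sigma               = {three_sigma_allowable:8.2f}")
print(f"actual 0.135th percentile    = {true_percentile_value:8.2f}")
print(f"fraction below 3-sigma value = {fraction_below:.5f} (normal theory: 0.00135)")
```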
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and that the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerate (pure) state to a non-degenerate one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and that the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.
2011-01-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of adjusted R², R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
Tests of Hypotheses Arising in the Correlated Random Coefficient Model
Heckman, James J.; Schmierer, Daniel
2010-01-01
This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148
Parametric analysis for matched pair survival data.
Manatunga, A K; Oakes, D
1999-12-01
Hougaard's (1986) bivariate Weibull distribution with positive stable frailties is applied to matched pairs survival data when either or both components of the pair may be censored and covariate vectors may be of arbitrary fixed length. When there is no censoring, we quantify the corresponding gain in Fisher information over a fixed-effects analysis. With the appropriate parameterization, the results take a simple algebraic form. An alternative marginal ("independence working model") approach to estimation is also considered. This method ignores the correlation between the two survival times in the derivation of the estimator, but provides a valid estimate of standard error. It is shown that when both the correlation between the two survival times and the ratio of the within-pair variability to the between-pair variability of the covariates are high, the fixed-effects analysis captures most of the information about the regression coefficient but the independence working model does badly. When the correlation is low, and/or most of the variability of the covariates occurs between pairs, the reverse is true. The random effects model is applied to data on skin grafts, and on loss of visual acuity among diabetics. In conclusion, some extensions of the methods are indicated and they are placed in the wider context of generalized estimating equation methodology.
A statistical model for interpreting computerized dynamic posturography data
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
King, Gillian
2017-05-01
The therapist is a neglected and poorly understood variable in pediatric rehabilitation. Much more attention has been paid to the role of intervention on client change than the role of therapist-related variables. This article synthesizes what is known in the adult and child mental health literature about the role of the therapist, and integrates this with work in pediatric rehabilitation. The article reviews the mental health literature on the therapist as a random variable associated with client outcomes (role of the therapist alone) and the role of three other therapist-related constructs: the therapist-client relationship (therapist and client), treatment implementation (therapist and intervention), and therapy process (therapist, client, and intervention considered holistically). Implications for clinical practice in pediatric rehabilitation include recognition of change as a multi-determined phenomenon involving common therapist-related factors, the therapist's role in creating facilitative conditions for change (through supportive relationships, positive expectancies, and mastery and learning experiences), and the importance of training in collaborative partnership skills. A contextual approach to therapeutic change is advocated, in which psychosocial factors and mechanisms are acknowledged, the therapist is seen as crucial, and the intervention process is seen as the context or vehicle through which changes occur.
Land Change Trends in the Great Plains: Linkages to Climate Variability and Socioeconomic Drivers
NASA Astrophysics Data System (ADS)
Drummond, M. A.
2009-12-01
Land use and land cover change have complex linkages to climate variability and change, socioeconomic driving forces, and land management challenges. To assess these land change dynamics and their driving forces in the Great Plains, we compare and contrast contemporary land conversion across seventeen ecoregions using Landsat remote sensing data and statistical analysis. Large area change analysis in agricultural regions is often hampered by the potential for substantial change detection error and the tendency for land conversions to occur in relatively small patches at the local level. To facilitate a regional scale analysis, a statistical sampling design of randomly selected 10-km by 10-km blocks is used in order to efficiently identify the types and rates of land conversions for four time periods between 1972 and 2000, stratified by relatively homogenous ecoregions. Results show a range of rates and processes of land change that vary by ecoregion contingent on the prevailing interactions between socioeconomic and environmental factors such as climate variability, water availability, and land quality. Ecoregions have differential climate and biophysical advantages for agricultural production and other land use change. Human actions further strengthen or dampen the characteristics of change through farm policy, technological advances, economic opportunities, population and demographic shifts, and surface and groundwater irrigation.
Strength training, but not endurance training, reduces motor unit discharge rate variability.
Vila-Chã, Carolina; Falla, Deborah
2016-02-01
This study evaluates and compares the effects of strength and endurance training on motor unit discharge rate variability and force steadiness of the knee extensor muscles. Thirty sedentary healthy men (age 26.0 ± 3.8 yrs) were randomly assigned to strength training, endurance training or a control group. Conventional endurance and strength training was performed 3 days per week over a period of 6 weeks. Maximum voluntary contraction (MVC), time to task failure (at 30% MVC), and the coefficient of variation (CoV) of force and of the discharge rates of motor units from the vastus medialis obliquus and vastus lateralis were determined as subjects performed 20% and 30% MVC knee extension contractions before and after training. CoV of motor unit discharge rates was significantly reduced for both muscles following strength training (P<0.001), but did not change in the endurance (P=0.875) or control group (P=0.995). CoV of force was reduced after the strength training intervention only (P<0.01). Strength training, but not endurance training, reduces motor unit discharge rate variability and enhances force steadiness of the knee extensors. These results provide new insights into the neuromuscular adaptations that occur with different training methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS UNIX Library/Standalone uses the Latin Hypercube Sampling method (LHS) to generate samples. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the n values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
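A minimal sketch of basic LHS for independent variables: stratify [0,1) into n equal-probability intervals per variable, draw once within each interval, shuffle the pairing across variables, and then map onto target marginals via inverse CDFs. The restricted-pairing (correlation control) step of the library described above is omitted here.

```python
# Minimal Latin Hypercube Sampling for independent variables; no correlation control.
import numpy as np
from scipy.stats import norm, expon

def lhs(n_samples, n_vars, rng):
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # one draw per equal-probability stratum, then a random permutation across strata
        strata = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
        u[:, j] = rng.permutation(strata)
    return u

rng = np.random.default_rng(5)
u = lhs(n_samples=10, n_vars=2, rng=rng)

# Map the uniform LHS sample onto target marginals via inverse CDFs,
# e.g. a normal and an exponential input parameter (illustrative choices).
x1 = norm.ppf(u[:, 0], loc=100.0, scale=15.0)
x2 = expon.ppf(u[:, 1], scale=3.0)
print(np.column_stack([x1, x2]))
```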
Burt, Richard D; Thiede, Hanne
2014-11-01
Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t>0, we have that N_α(N_β(t)) equals in distribution the random sum Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t>0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
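A quick Monte Carlo check of the quoted identity under the stated independence: evaluating one Poisson process at the random time N_β(t) matches, in distribution, a sum of N_β(t) i.i.d. Poisson(α) variables. The rates and t below are illustrative.

```python
# Monte Carlo check: N_alpha(N_beta(t)) vs. a random sum of Poisson(alpha) terms.
import numpy as np

rng = np.random.default_rng(6)
alpha, beta, t, reps = 1.5, 2.0, 3.0, 200_000

M = rng.poisson(beta * t, size=reps)               # N_beta(t)
composed = rng.poisson(alpha * M)                  # N_alpha evaluated at the random time M
random_sum = np.array([rng.poisson(alpha, size=m).sum() for m in M[:20_000]])

print("composition  mean/var:", composed.mean(), composed.var())
print("random sum   mean/var:", random_sum.mean(), random_sum.var())
# Both should be near alpha*beta*t = 9 and alpha*beta*t*(1 + alpha) = 22.5.
```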
Crack detection in oak flooring lamellae using ultrasound-excited thermography
NASA Astrophysics Data System (ADS)
Pahlberg, Tobias; Thurley, Matthew; Popovic, Djordje; Hagman, Olle
2018-01-01
Today, a large number of people are manually grading and detecting defects in wooden lamellae in the parquet flooring industry. This paper investigates the possibility of using the ensemble methods random forests and boosting to automatically detect cracks using ultrasound-excited thermography and a variety of predictor variables. When friction occurs in thin cracks, they become warm and thus visible to a thermographic camera. Several image processing techniques have been used to suppress the noise and enhance probable cracks in the images. The most successful predictor variables captured the upper part of the heat distribution, such as the maximum temperature, kurtosis and percentile values 92-100 of the edge pixels. The texture in the images was captured by Completed Local Binary Pattern histograms and cracks were also segmented by background suppression and thresholding. The classification accuracy was significantly improved from previous research through added image processing, introduction of more predictors, and by using automated machine learning. The best ensemble methods reach an average classification accuracy of 0.8, which is very close to the authors' own manual attempt at separating the images (0.83).
Ratio index variables or ANCOVA? Fisher's cats revisited.
Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S
2010-01-01
Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
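A small simulation of the pitfall Fisher highlighted: two ratios that share a noisy denominator can appear correlated even when their numerators are independent. All quantities below are synthetic and illustrative.

```python
# Spurious correlation between ratio variables that share a noisy denominator.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

x = rng.normal(100, 10, n)        # numerator 1 (e.g., organ weight)
y = rng.normal(50, 5, n)          # numerator 2, independent of x
z = rng.normal(70, 15, n)         # shared denominator (e.g., body weight)

r_xy = np.corrcoef(x, y)[0, 1]              # ~0: numerators are unrelated
r_ratio = np.corrcoef(x / z, y / z)[0, 1]   # clearly positive: induced by the shared denominator

print(f"corr(x, y)     = {r_xy:+.3f}")
print(f"corr(x/z, y/z) = {r_ratio:+.3f}")
```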
Screening large-scale association study data: exploiting interactions using random forests.
Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul
2004-12-10
Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and the variability of environmental conditions, uncertainty impacts their applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to a formal mathematical proof. New theorems on the multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of the principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork for finding probabilistic models for raw scalar transmissibility functions, which are expounded in detail in Part II of this study.
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers make accurate estimates for their own operations, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and present a strong right-skewness characteristic. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
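An illustrative Monte Carlo of the mechanism described: pushing a normally distributed load through a convex aggregate supply curve yields a right-skewed price distribution, with the skewness growing as demand approaches capacity. The supply-curve form and parameters below are assumptions for this sketch, not the paper's calibrated curve.

```python
# Right-skewed prices from a normally distributed load and a convex supply curve.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(8)
capacity = 1000.0

def price(load):
    # stylized convex supply curve: prices rise sharply as load nears capacity
    return 20.0 + 150.0 / (1.0 - np.clip(load, 0.0, 0.99 * capacity) / capacity)

for mean_load in (600.0, 850.0, 950.0):
    load = rng.normal(mean_load, 60.0, size=100_000)
    p = price(load)
    print(f"mean load {mean_load:5.0f}: price mean={p.mean():8.1f}  skewness={skew(p):5.2f}")
```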
First passage properties of a generalized Pólya urn
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2016-12-01
A generalized two-component Pólya urn process, parameterized by a variable α , is studied in terms of the likelihood that due to fluctuations the initially smaller population in a scenario of competing population growth eventually becomes the larger, or is the larger after a certain passage of time. By casting the problem as an inhomogeneous directed random walk we quantify this role-reversal phenomenon through the first passage probability that equality in size is first reached at a given time, and the related exit probability that equality in size is reached no later than a given time. Using an embedding technique, exact results are obtained which complement existing results and provide new insights into behavioural changes (akin to phase transitions) which occur at defined values of α .
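A simulation sketch in the spirit of the problem above, assuming one common nonlinear-urn rule (the next unit joins population i with probability proportional to n_i^α); this may differ from the paper's exact parameterization. It estimates the probability that the initially smaller population reaches equality within a given number of steps, i.e. an exit probability.

```python
# Nonlinear two-colour urn: estimate P(equality in size within a step budget).
# The reinforcement rule (weight = count**alpha) is an assumption for this sketch.
import numpy as np

rng = np.random.default_rng(9)

def prob_equality_by(n_steps, alpha, start=(1, 3), reps=10_000):
    hits = 0
    for _ in range(reps):
        a, b = start                      # a starts smaller than b
        for _ in range(n_steps):
            wa, wb = a ** alpha, b ** alpha
            if rng.random() < wa / (wa + wb):
                a += 1
            else:
                b += 1
            if a == b:
                hits += 1
                break
    return hits / reps

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha={alpha}: P(equality within 200 steps) ~ {prob_equality_by(200, alpha):.3f}")
```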
NASA Astrophysics Data System (ADS)
Astuti Thamrin, Sri; Taufik, Irfan
2018-03-01
Dengue haemorrhagic fever (DHF) is an infectious disease caused by the dengue virus. The number of people with DHF is related to the neighbourhood, for example the sub-district, and the characteristics of a sub-district are formed by the individuals who are domiciled in it. Data containing individuals nested within sub-districts have a hierarchical structure, which calls for multilevel analysis. A frequently encountered response variable in such data is the time until an event occurs. Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in DHF survival. Using a case study approach, we report on the implications of using multilevel spatial survival models to study geographical inequalities in all-cause survival.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2018-01-01
A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations, and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait and the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.
Nergiz, Humeyra; Tabur, Mehmet Ali; Ayvaz, Yusuf
2013-08-01
Diurnal time-activity budgets of White-headed Ducks were investigated with respect to sex and temporal environmental variables to document behavioral responses to winter conditions and nutritional requirements at Burdur Lake, where the largest winter concentrations occur. Behaviors of males and females were recorded separately in randomly selected focal flocks during 1140 sessions. For the entire population a large proportion of time was spent resting. During the day the ducks spent 61% of their time resting, 22% feeding, 12% in comfort behaviors and 5% in locomotion. Resting peaked in the middle of the day while feeding was observed most frequently in the evening and morning. Time use did not differ significantly between the sexes. However, more time was spent feeding during windy days as wave height increased.
Random variability explains apparent global clustering of large earthquakes
Michael, A.J.
2011-01-01
The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
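A sketch of one simple test in this spirit: compare the dispersion of annual counts of large earthquakes with what a constant-rate Poisson null produces, using simulation. The "catalog" below is synthetic, and a real analysis would also remove or model aftershock sequences, as the abstract notes.

```python
# Dispersion test of annual large-earthquake counts against a constant-rate
# Poisson null. The catalog here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(10)

observed_counts = rng.poisson(0.7, size=111)     # stand-in for a 111-year catalog

def dispersion_index(counts):
    return counts.var(ddof=1) / counts.mean()

obs_stat = dispersion_index(observed_counts)
rate = observed_counts.mean()

# Null distribution of the dispersion index under a constant-rate Poisson model
null_stats = np.array([
    dispersion_index(rng.poisson(rate, size=observed_counts.size))
    for _ in range(10_000)
])
p_value = (null_stats >= obs_stat).mean()
print(f"dispersion index = {obs_stat:.2f}, one-sided p vs Poisson null = {p_value:.3f}")
```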
Probabilistic evaluation of fuselage-type composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
Financial Management of a Large Multi-site Randomized Clinical Trial
Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.
2014-01-01
Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748
Financial management of a large multisite randomized clinical trial.
Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G
2014-08-01
The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21 112 866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2500 targeted sample size, 138 (5·5%) were randomized during the first five years and 1387 (55·5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13 845) of the projected per-patient costs ($152 992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.
NASA Astrophysics Data System (ADS)
Tesoriero, Anthony J.; Gronberg, Jo Ann; Juckem, Paul F.; Miller, Matthew P.; Austin, Brian P.
2017-08-01
Machine learning techniques were applied to a large (n > 10,000) compliance monitoring database to predict the occurrence of several redox-active constituents in groundwater across a large watershed. Specifically, random forest classification was used to determine the probabilities of detecting elevated concentrations of nitrate, iron, and arsenic in the Fox, Wolf, Peshtigo, and surrounding watersheds in northeastern Wisconsin. Random forest classification is well suited to describe the nonlinear relationships observed among several explanatory variables and the predicted probabilities of elevated concentrations of nitrate, iron, and arsenic. Maps of the probability of elevated nitrate, iron, and arsenic can be used to assess groundwater vulnerability and the vulnerability of streams to contaminants derived from groundwater. Processes responsible for elevated concentrations are elucidated using partial dependence plots. For example, an increase in the probability of elevated iron and arsenic occurred when well depths coincided with the glacial/bedrock interface, suggesting a bedrock source for these constituents. Furthermore, groundwater in contact with Ordovician bedrock has a higher likelihood of elevated iron concentrations, which supports the hypothesis that groundwater liberates iron from a sulfide-bearing secondary cement horizon of Ordovician age. Application of machine learning techniques to existing compliance monitoring data offers an opportunity to broadly assess aquifer and stream vulnerability at regional and national scales and to better understand geochemical processes responsible for observed conditions.
Tesoriero, Anthony J.; Gronberg, Jo Ann M.; Juckem, Paul F.; Miller, Matthew P.; Austin, Brian P.
2017-01-01
Machine learning techniques were applied to a large (n > 10,000) compliance monitoring database to predict the occurrence of several redox-active constituents in groundwater across a large watershed. Specifically, random forest classification was used to determine the probabilities of detecting elevated concentrations of nitrate, iron, and arsenic in the Fox, Wolf, Peshtigo, and surrounding watersheds in northeastern Wisconsin. Random forest classification is well suited to describe the nonlinear relationships observed among several explanatory variables and the predicted probabilities of elevated concentrations of nitrate, iron, and arsenic. Maps of the probability of elevated nitrate, iron, and arsenic can be used to assess groundwater vulnerability and the vulnerability of streams to contaminants derived from groundwater. Processes responsible for elevated concentrations are elucidated using partial dependence plots. For example, an increase in the probability of elevated iron and arsenic occurred when well depths coincided with the glacial/bedrock interface, suggesting a bedrock source for these constituents. Furthermore, groundwater in contact with Ordovician bedrock has a higher likelihood of elevated iron concentrations, which supports the hypothesis that groundwater liberates iron from a sulfide-bearing secondary cement horizon of Ordovician age. Application of machine learning techniques to existing compliance monitoring data offers an opportunity to broadly assess aquifer and stream vulnerability at regional and national scales and to better understand geochemical processes responsible for observed conditions.
Lindquist, Lee A; Ramirez-Zohfeld, Vanessa; Sunkara, Priya D; Forcucci, Chris; Campbell, Dianne S; Mitzen, Phyllis; Ciolino, Jody D; Kricke, Gayle; Seltzer, Anne; Ramirez, Ana V; Cameron, Kenzie A
2017-11-01
Investigate the effect of PlanYourLifespan.org (PYL) on knowledge of posthospital discharge options. Multisite randomized controlled trial. Nonhospitalized adults, aged ≥65 years, living in urban, suburban, and rural areas of Texas, Illinois, and Indiana. PYL is a national, publicly available tool that provides education on posthospital therapy choices and local home-based resources. Participants completed an in-person baseline survey, followed by exposure to intervention or attention control (AC) websites, then 1-month and 3-month telephone surveys. The primary knowledge outcome was measured with 6 items (possible 0-6 points) pertaining to hospital discharge needs. Among 385 participants randomized, mean age was 71.9 years (standard deviation 5.6) and 79.5% of participants were female. At 1 month, the intervention group had a 0.6 point change (standard deviation = 1.6) versus the AC group who had a -0.1 point change in knowledge score. Linear mixed modeling results suggest sex, health literacy level, level of education, income, and history of high blood pressure/kidney disease were significant predictors of knowledge over time. Controlling for these variables, treatment effect remained significant (P < 0.0001). Seniors who used PYL demonstrated an increased understanding of posthospitalization and home services compared to the control group. © 2017 Society of Hospital Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banfield, J.F.; Veblen, D.R.; Jones, B.F.
1991-10-01
Abert Lake in south-central Oregon provides a site suitable for the study of sequential weathering and diagenetic events. In this first of two papers, transmission electron microscopy was used to characterize the igneous mineralogy, the subsolidus alteration assemblage, and the structural and chemical aspects of silicate weathering reactions that occur in the volcanic rocks that outcrop around the lake. Olivine and pyroxene replacement occurred topotactically, whereas feldspar and glass alteration produced randomly oriented smectite in channels and cavities. The tetrahedral, octahedral, and interlayer compositions of the weathering products, largely dioctahedral smectites, varied with primary mineral composition, rock type, and as the result of the addition of elements released from adjacent reaction sites. The variability within and between the smectite assemblages highlights the microenvironmental diversity, fluctuating redox conditions, and variable solution chemistry associated with mineral weathering reactions in the surficial environment. Late-stage exhalative and aqueous alteration of the volcanics redistributed many components and formed a variety of alkali and alkaline-earth carbonate, chloride, sulfate, and fluoride minerals in vugs and cracks. Overall, substantial Mg, Si, Na, Ca, and K are released by weathering reactions that include the almost complete destruction of the Mg-smectite that initially replaced olivine. The leaching of these elements from the volcanics provides an important source of these constituents in the lake water. The nature of the subsequent diagenetic reactions resulting from the interaction between the materials transported to the lake and the solution will be described in the second paper.
NASA Astrophysics Data System (ADS)
Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.
2017-09-01
Traffic safety is a major concern in the transportation industry due to the immense monetary and emotional burden caused by crashes of various severity levels, especially injury and fatality crashes. To reduce such crashes on public roads, safety management processes are commonly implemented, including network screening, problem diagnosis, countermeasure identification, and project prioritization. The selection of countermeasures for potential mitigation of crashes is governed by the influential factors which impact roadway crashes. Crash prediction models are the tools widely adopted by safety practitioners and researchers to link various influential factors to crash occurrences. Many different approaches have been used in past studies to develop better-fitting models which also exhibit prediction accuracy. In this study, a crash prediction model is developed to investigate vehicular crashes occurring at roadway segments. The spatial and temporal nature of crash data is exploited to form a spatiotemporal model which accounts for the different types of heterogeneity among crash data and geometric or traffic flow variables. This study utilizes the Poisson lognormal model with random effects, which can accommodate yearly variations in explanatory variables and spatial correlations among segments. The dependency of vehicular crashes at segments on factors related to roadway geometry, traffic flow, and road surface type was established: lane width, posted speed limit, pavement type, and AADT were found to be correlated with vehicle crashes.
Visualizing Time-Varying Distribution Data in EOS Application
NASA Technical Reports Server (NTRS)
Shen, Han-Wei
2004-01-01
In this research, we have developed several novel visualization methods for spatial probability density function data. Our focus has been on 2D spatial datasets, where each pixel is a random variable and has multiple samples that are the results of experiments on that random variable. We developed novel clustering algorithms as a means to reduce the information contained in these datasets, and investigated different ways of interpreting and clustering the data.
Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions
König, Sandra; Schauer, Stefan
2016-01-01
Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572
Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability
ERIC Educational Resources Information Center
Katzenmeyer, William G.; Stenner, A. Jackson
1975-01-01
The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…
Simulation of the Effects of Random Measurement Errors
ERIC Educational Resources Information Center
Kinsella, I. A.; Hannaidh, P. B. O.
1978-01-01
Describes a simulation method for measurement errors that requires calculators and tables of random digits. Each student simulates the random behaviour of the component variables in the function, and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
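As a rough software analogue of this classroom exercise, the sketch below propagates random measurement errors through a derived quantity (density = mass/volume) by Monte Carlo, pooling the "students'" results to outline the sampling distribution. The measured quantities, error sizes, and class size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured quantities: each "student" draws one noisy value
# of mass (g) and volume (cm^3), then computes density = mass / volume.
true_mass, sigma_mass = 50.0, 0.5
true_volume, sigma_volume = 20.0, 0.3
n_students = 200                       # pooling results across the class

mass = rng.normal(true_mass, sigma_mass, n_students)
volume = rng.normal(true_volume, sigma_volume, n_students)
density = mass / volume                # derived quantity with propagated random error

print(f"mean density  = {density.mean():.4f} g/cm^3")
print(f"spread (s.d.) = {density.std(ddof=1):.4f} g/cm^3")
# Pooling all simulated values outlines the sampling distribution of the
# derived function, which is the point of the exercise.
```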
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
2013-01-01
Background The use of restricted randomisation methods such as minimisation is increasing. This paper investigates under what conditions it is preferable to use restricted randomisation in order to achieve balance between treatment groups at baseline with regard to important prognostic factors and whether trialists should be concerned that minimisation may be considered deterministic. Methods Using minimisation as the randomisation algorithm, treatment allocation was simulated for hypothetical patients entering a theoretical study having values for prognostic factors randomly assigned with a stipulated probability. The number of times the allocation could have been determined with certainty and the imbalances which might occur following randomisation using minimisation were examined. Results Overall treatment balance is relatively unaffected by reducing the probability of allocation to optimal treatment group (P) but within-variable balance can be affected by any P <1. This effect is magnified by increased numbers of prognostic variables, the number of categories within them and the prevalence of these categories within the study population. Conclusions In general, for smaller trials, probability of treatment allocation to the treatment group with fewer numbers requires a larger value P to keep treatment and variable groups balanced. For larger trials probability of allocation values from P = 0.5 to P = 0.8 can be used while still maintaining balance. For one prognostic variable there is no significant benefit in terms of predictability in reducing the value of P. However, for more than one prognostic variable, significant reduction in levels of predictability can be achieved with the appropriate choice of P for the given trial design. PMID:23537389
McPherson, Gladys C; Campbell, Marion K; Elbourne, Diana R
2013-03-27
The use of restricted randomisation methods such as minimisation is increasing. This paper investigates under what conditions it is preferable to use restricted randomisation in order to achieve balance between treatment groups at baseline with regard to important prognostic factors and whether trialists should be concerned that minimisation may be considered deterministic. Using minimisation as the randomisation algorithm, treatment allocation was simulated for hypothetical patients entering a theoretical study having values for prognostic factors randomly assigned with a stipulated probability. The number of times the allocation could have been determined with certainty and the imbalances which might occur following randomisation using minimisation were examined. Overall treatment balance is relatively unaffected by reducing the probability of allocation to optimal treatment group (P) but within-variable balance can be affected by any P <1. This effect is magnified by increased numbers of prognostic variables, the number of categories within them and the prevalence of these categories within the study population. In general, for smaller trials, probability of treatment allocation to the treatment group with fewer numbers requires a larger value P to keep treatment and variable groups balanced. For larger trials probability of allocation values from P = 0.5 to P = 0.8 can be used while still maintaining balance. For one prognostic variable there is no significant benefit in terms of predictability in reducing the value of P. However, for more than one prognostic variable, significant reduction in levels of predictability can be achieved with the appropriate choice of P for the given trial design.
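A minimal sketch of the kind of minimisation procedure studied here is given below, assuming a two-arm trial, the range across arms as the marginal imbalance measure, and a biased-coin probability p_optimal (the P above) of allocating to the arm that minimises imbalance. The exact imbalance metric and tie-handling used in the paper's simulations are not specified here and are assumptions.

```python
import random
from collections import defaultdict

ARMS = ("A", "B")

def minimisation_assign(patient, counts, factors, p_optimal=0.8):
    """Assign one patient by minimisation with allocation probability p_optimal.

    patient : dict mapping factor name -> category, e.g. {"sex": "F", "age_group": "old"}
    counts  : counts[arm][factor][category] = number of patients already assigned
    """
    score = {}
    for candidate in ARMS:
        total = 0
        for f in factors:
            cat = patient[f]
            # counts for this factor level in each arm if the patient joined `candidate`
            hypothetical = [counts[arm][f][cat] + (1 if arm == candidate else 0)
                            for arm in ARMS]
            total += max(hypothetical) - min(hypothetical)   # range as the imbalance measure
        score[candidate] = total

    best = min(ARMS, key=score.get)
    if len(set(score.values())) == 1:
        chosen = random.choice(ARMS)           # tie: fall back to simple randomisation
    elif random.random() < p_optimal:
        chosen = best                          # biased coin toward better balance
    else:
        chosen = random.choice([a for a in ARMS if a != best])

    for f in factors:                          # update the running totals
        counts[chosen][f][patient[f]] += 1
    return chosen

# Illustrative usage with two prognostic factors
factors = ["sex", "age_group"]
counts = {arm: {f: defaultdict(int) for f in factors} for arm in ARMS}
for patient in [{"sex": "F", "age_group": "old"},
                {"sex": "M", "age_group": "young"},
                {"sex": "F", "age_group": "young"}]:
    print(minimisation_assign(patient, counts, factors, p_optimal=0.8))
```

Lowering p_optimal makes individual allocations less predictable at the cost of slightly worse within-variable balance, which is the trade-off examined in the abstract above.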
2013-01-01
Background Psychological stress occurs when an individual perceives that environmental demands tax or exceed his or her adaptive capacity. Its association with severe health and emotional diseases, points out the necessity to find new efficient strategies to treat it. Moreover, psychological stress is a very personal problem and requires training focused on the specific needs of individuals. To overcome the above limitations, the INTERSTRESS project suggests the adoption of a new paradigm for e-health - Interreality - that integrates contextualized assessment and treatment within a hybrid environment, bridging the physical and the virtual worlds. According to this premise, the aim of this study is to investigate the advantages of using advanced technologies, in combination with cognitive behavioral therapy (CBT), based on a protocol for reducing psychological stress. Methods/Design The study is designed as a randomized controlled trial. It includes three groups of approximately 50 subjects each who suffer from psychological stress: (1) the experimental group, (2) the control group, (3) the waiting list group. Participants included in the experimental group will receive a treatment based on cognitive behavioral techniques combined with virtual reality, biofeedback and mobile phone, while the control group will receive traditional stress management CBT-based training, without the use of new technologies. The wait-list group will be reassessed and compared with the two other groups five weeks after the initial evaluation. After the reassessment, the wait-list patients will randomly receive one of the two other treatments. Psychometric and physiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as the qualitative dependent variable. Discussion What we would like to show with the present trial is that bridging virtual experiences, used to learn coping skills and emotional regulation, with real experiences using advanced technologies (virtual reality, advanced sensors and smartphones) is a feasible way to address actual limitations of existing protocols for psychological stress. Trial registration http://clinicaltrials.gov/ct2/show/NCT01683617 PMID:23806013
An evaluation of satellite-derived humidity and its relationship to convective development
NASA Technical Reports Server (NTRS)
Fuelberg, Henry E.
1993-01-01
An aircraft prototype of the High-Resolution Interferometer Sounder (HIS) was flown over Tennessee and northern Alabama during summer 1986. The HIS temperature and dewpoint soundings were examined on two flight days to determine their error characteristics and utility in mesoscale analyses. Random errors were calculated from structure functions while total errors were obtained by pairing the HIS soundings with radiosonde-derived profiles. Random temperature errors were found to be less than 1 C at most levels, but random dewpoint errors ranged from 1 to 5 C. Total errors of both parameters were considerably greater, with dewpoint errors especially large on the day having a pronounced subsidence inversion. Cumulus cloud cover on 15 June limited HIS mesoscale analyses on that day. Previously undetected clouds were found in many HIS fields of view, and these probably produced the low-level horizontal temperature and dewpoint variations observed in the retrievals. HIS dewpoints at 300 mb indicated a strong moisture gradient that was confirmed by GOES 6.7-micron imagery. HIS mesoscale analyses on 19 June revealed a tongue of humid air stretching across the study area. The moist region was confirmed by radiosonde data and imagery from the Multispectral Atmospheric Mapping Sensor (MAMS). Convective temperatures derived from HIS retrievals helped explain the cloud formation that occurred after the HIS overflights. Crude estimates of Bowen ratio were obtained from HIS data using a mixing-line approach. Values indicated that areas of large sensible heat flux were the areas of first cloud development. These locations were also suggested by GOES visible and infrared imagery. The HIS retrievals indicated that areas of thunderstorm formation were regions of greatest instability. Local landscape variability and atmospheric temperature and humidity fluctuations were found to be important factors in producing the cumulus clouds on 19 June. HIS soundings were capable of detecting some of this variability. The authors were impressed by HIS's performance on the two study days.
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
NASA Astrophysics Data System (ADS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data stated that since September 2012, there were 1,733 cases of Middle East Respiratory Syndrome (MERS) with 628 death cases that occurred in 27 countries. MERS was first identified in Saudi Arabia in 2012 and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between an infected and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states by using random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared across several initial conditions.
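The stochastic side of such a model can be illustrated with a continuous-time Markov chain simulation of a generic SIR-type outbreak, estimating the probability of disease extinction from repeated realisations. The compartment structure, rates, and extinction criterion below are illustrative placeholders, not the MERS model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctmc_outbreak(beta=0.6, gamma=0.3, S0=500, I0=2, threshold=50):
    """One CTMC realisation of a generic SIR-type outbreak (embedded jump chain only,
    since event times do not affect extinction). Returns True if the chain dies out
    before `threshold` cumulative infections."""
    S, I, cum = S0, I0, I0
    N = S0 + I0
    while I > 0:
        rate_inf = beta * S * I / N          # rate of a new infection event
        rate_rec = gamma * I                 # rate of a recovery/removal event
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S -= 1; I += 1; cum += 1
        else:
            I -= 1
        if cum >= threshold:
            return False                     # a major outbreak, not extinction
    return True

runs = 2000
extinct = sum(ctmc_outbreak() for _ in range(runs))
print(f"estimated extinction probability: {extinct / runs:.3f}")
# Branching-process theory suggests roughly (gamma/beta)**I0 = 0.25 for these rates.
```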
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to heterogeneity of the soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Considering the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regime of frozen soil around a single freezing pipe is obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe are the same for the three methods, while the standard deviations are different. The distributions of the standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The data computed with the random variable and stochastic process methods differ greatly from the measured data, while the data computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
Compiling probabilistic, bio-inspired circuits on a field programmable analog array
Marr, Bo; Hasler, Jennifer
2014-01-01
A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
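A purely software analogue (not the analog FPAA hardware) of building richer random variables from Bernoulli primitives is sketched below: counting Bernoulli trials to the first success gives a geometric variable that, suitably scaled, approximates an exponential, and an inverse-CDF transform then yields arbitrary distributions. Step size, rates, and target distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def exponential_from_bernoulli(rate, dt=1e-3, size=10_000):
    """Approximate Exp(rate) samples using only Bernoulli(p) primitives:
    count time steps of width dt until the first success, with p = rate*dt."""
    p = rate * dt
    trials = rng.geometric(p, size=size)   # Bernoulli trials until first success
    return trials * dt

samples = exponential_from_bernoulli(rate=2.0)
print("Bernoulli-built mean:", round(samples.mean(), 3), "(theory 0.5)")

# Random variables of an arbitrary distribution then follow from the
# inverse-CDF transform of a uniform variate, e.g. a Weibull(k, scale):
k, scale = 1.5, 1.0
u = rng.random(10_000)
weibull = scale * (-np.log1p(-u)) ** (1.0 / k)
print("Weibull sample mean:", round(weibull.mean(), 3))
```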
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
SETI and SEH (Statistical Equation for Habitables)
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-01-01
The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million±200 million, and the average distance in between any couple of nearby habitable planets should be about 88 light years±40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any couple of adjacent habitable planets.
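A quick Monte Carlo check of the CLT argument behind the SEH can be sketched as follows: take ten independent positive factors, each uniform within ±10% of an assumed mean (the means below are placeholders, not Dole's actual values), and confirm that the log of their product is close to Gaussian, i.e. that the product is approximately lognormal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten illustrative positive factors, each uniform within +/-10% of its mean.
means = np.array([0.6, 0.5, 0.9, 0.7, 0.8, 0.4, 0.95, 0.3, 0.2, 0.1])
n_draws = 100_000
factors = rng.uniform(means * 0.9, means * 1.1, size=(n_draws, len(means)))
product = factors.prod(axis=1)           # analogue of N_Hab up to a constant

# If the product is lognormal, log(product) should be close to Gaussian.
logs = np.log(product)
mu, sigma = logs.mean(), logs.std()
skew = ((logs - mu) ** 3).mean() / sigma ** 3
print(f"log-product mean={mu:.3f}, sd={sigma:.3f}, skewness={skew:.3f}")
# Near-zero skewness of log(product) is what the CLT-based SEH argument predicts;
# the lognormal's mean, median and mode then follow from mu and sigma.
```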
Harrison, Rosamund; Veronneau, Jacques; Leroux, Brian
2010-05-13
The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. This trial is registered as ISRCTN41467632.
2010-01-01
Background The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Methods/design Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. Discussion In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. Trial registration This trial is registered as ISRCTN41467632. PMID:20465831
van Waart, Hanna; Stuiver, Martijn M; van Harten, Wim H; Geleijn, Edwin; de Maaker-Berkhof, Marianne; Schrama, Jolanda; Geenen, Maud M; Meerum Terwogt, Jetske M; van den Heiligenberg, Simone M; Hellendoorn-van Vreeswijk, Jeannette A J H; Sonke, Gabe S; Aaronson, Neil K
2018-01-01
We report the recruitment rate, reasons for and factors influencing non-participation, and descriptive results of a randomized controlled trial of two different exercise programs for patients with colon cancer undergoing adjuvant chemotherapy. Participants were randomized to a low-intensity, home-based program (Onco-Move), a moderate- to high-intensity, combined supervised resistance and aerobic exercise program (OnTrack), or Usual Care. Non-participants provided reasons for non-participation and were asked to complete a questionnaire assessing behavioral and attitudinal variables. Trial participants completed performance-based and self-reported outcome measures prior to randomization, at the end of chemotherapy, and at the 6-month follow-up. Twenty-three of 63 referred patients agreed to participate in the trial. All 40 non-participants provided reasons for non-participation. Forty-five percent of the non-participants completed the questionnaire. Those who did not want to exercise had higher fatigue scores at baseline and a more negative attitude toward exercise. Compliance to both programs was high and no adverse events occurred. On average, the colon cancer participants were able to maintain or improve their physical fitness levels and maintain or decrease their fatigue levels during chemotherapy and follow-up. Recruitment of patients with colon cancer to a physical exercise trial during adjuvant chemotherapy proved to be difficult, underscoring the need to develop more effective strategies to increase participation rates. Both home-based and supervised programs are safe and feasible in patients with colon cancer undergoing chemotherapy. Effectiveness needs to be established in a larger trial. Netherlands Trial Register - NTR2159.
Momentary Effects of Exposure to Pro-Smoking Media on College Students’ Future Smoking Risk
Shadel, William G.; Martino, Steven C.; Setodji, Claude; Scharf, Deborah
2012-01-01
Objective This study used ecological momentary assessment to examine acute changes in college students’ future smoking risk as a function of their exposure to pro-smoking media (e.g., smoking in movies, paid advertising, point-of-sale promotions). Methods A sample of 135 college students (ever and never smokers) carried handheld computers for 21 days, recording their exposures to all forms of pro-smoking media during the assessment period. They also responded to three investigator-initiated control prompts during each day of the assessment period (i.e., programmed to occur randomly). After each pro-media smoking exposure and after each random control prompt they answered questions that measured their risk of future smoking. Responses between pro-smoking media encounters were compared to responses made during random control prompts. Results Compliance with the study protocol was high, with participants responding to over 83% of all random prompts. Participants recorded nearly three encounters with pro-smoking media each week. Results of linear mixed modeling indicated that all participants had higher future smoking risk following exposure to pro-smoking media compared with control prompts (p < 0.05); this pattern of response did not differ between ever and never smokers (p = 0.769). Additional modeling of the variances around participants’ risk of future smoking revealed that the response of never smokers to pro-smoking media was significantly more variable than the response of ever smokers. Conclusions Exposure to pro-smoking media is associated with acute changes in future smoking risk, and never smokers and ever smokers respond differently to these exposures. PMID:22353027
Random trinomial tree models and vanilla options
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir; Bayram, Kamola
2013-09-01
In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
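A minimal Monte Carlo sketch of such a random-environment trinomial model is given below for a vanilla European call: at every step fresh (U_n, D_n) are drawn with 0 < D_n < 1 < U_n and M_n = 1, and one of the three moves is taken. The move distributions and branch probabilities are placeholders (in particular they are not risk-neutral calibrated), so the output is illustrative only and does not reproduce the paper's pricing results.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_trinomial_price(S0=100.0, K=100.0, r=0.02, n_steps=50,
                           n_paths=50_000, p_up=0.3, p_mid=0.4):
    """Monte Carlo value of a European call under a random-environment trinomial tree."""
    p_down = 1.0 - p_up - p_mid
    # Random environment: fresh (U_n, D_n) per path and per step, with M_n = 1.
    U = rng.uniform(1.01, 1.05, size=(n_paths, n_steps))
    D = rng.uniform(0.95, 0.99, size=(n_paths, n_steps))
    which = rng.choice(3, size=(n_paths, n_steps), p=[p_up, p_mid, p_down])
    moves = np.where(which == 0, U, np.where(which == 1, 1.0, D))
    S_T = S0 * moves.prod(axis=1)                     # terminal prices
    return np.exp(-r) * np.maximum(S_T - K, 0.0).mean()   # discount over a unit horizon

print("illustrative call value:", round(random_trinomial_price(), 3))
```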
NASA Astrophysics Data System (ADS)
Fan, Linfeng; Lehmann, Peter; Or, Dani
2015-04-01
Naturally-occurring spatial variations in soil properties (e.g., soil depth, moisture, and texture) affect key hydrological processes and potentially the mechanical response of soil to hydromechanical loading (relative to the commonly-assumed uniform soil mantle). We quantified the effects of soil spatial variability on the triggering of rainfall-induced shallow landslides at the hillslope- and catchment-scales, using a physically-based landslide triggering model that considers interacting soil columns with mechanical strength thresholds (represented by the Fiber Bundle Model). The spatial variations in soil properties are represented as Gaussian random distributions and the level of variation is characterized by the coefficient of variation and correlation lengths of soil properties (i.e., soil depth, soil texture and initial water content in this study). The impacts of these spatial variations on landslide triggering characteristics were measured by comparing the times to triggering and landslide volumes for heterogeneous soil properties and homogeneous cases. Results at hillslope scale indicate that for spatial variations of an individual property (without cross correlation), the increasing of coefficient of variation introduces weak spots where mechanical damage is accelerated and leads to earlier onset of landslide triggering and smaller volumes. Increasing spatial correlation length of soil texture and initial water content also induces early landslide triggering and small released volumes due to the transition of failure mode from brittle to ductile failure. In contrast, increasing spatial correlation length of soil depth "reduces" local steepness and postpones landslide triggering. Cross-correlated soil properties generally promote landslide initiation, but depending on the internal structure of spatial distribution of each soil property, landslide triggering may be reduced. The effects of cross-correlation between initial water content and soil texture were investigated in detail at the catchment scale by incorporating correlations of both variables with topography. Results indicate that the internal structure of the spatial distribution of each soil property together with their interplays determine the overall performance of the coupled spatial variability. This study emphasizes the importance of both the randomness and spatial structure of soil properties on landslide triggering and characteristics.
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study, we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) random distribution of soil cells with normally distributed variability within the strata; (iii) random distribution of soil cells with uniformly distributed variability within the strata. All three types of small-scale variability were tested for (j) isotropic gas diffusivity and (jj) reduced horizontal gas diffusivity (constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability in the case of identical central-tendency measures of gas diffusivity. However, until now it did not consider spatial dependency of variability, which could even aggravate effects. To consider anisotropy and spatial variability in gas transport models we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects for gas-exchange models.
Branching random walk with step size coming from a power law
NASA Astrophysics Data System (ADS)
Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil
2015-09-01
In their seminal work, Brunet and Derrida made predictions on the random point configurations associated with branching random walks. We shall discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two of their predictions remain valid in this setup, and investigate various other issues mentioned in their paper.
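The flavour of such point configurations can be illustrated with a small simulation of a branching random walk with Poisson offspring and Pareto (power-law) displacements. The offspring law, tail index, and generation count below are assumptions, and the sketch only shows how a few large jumps dominate the extreme positions; it does not reproduce the paper's limit theorems.

```python
import numpy as np

rng = np.random.default_rng(5)

def branching_random_walk(n_generations=8, mean_offspring=2.0, alpha=1.5):
    """Positions after n_generations of a branching random walk with Poisson
    offspring and Pareto(alpha) displacements (heavy, power-law tails)."""
    positions = np.zeros(1)
    for _ in range(n_generations):
        offspring = rng.poisson(mean_offspring, size=positions.size)
        parents = np.repeat(positions, offspring)
        if parents.size == 0:                    # the whole population died out
            return parents
        steps = rng.pareto(alpha, size=parents.size) + 1.0   # support on [1, inf)
        signs = rng.choice([-1.0, 1.0], size=parents.size)
        positions = parents + signs * steps
    return positions

pos = branching_random_walk()
if pos.size:
    print(f"{pos.size} particles; max displacement {pos.max():.1f}, "
          f"median |displacement| {np.median(np.abs(pos)):.1f}")
# With power-law steps the rightmost particles sit far beyond the bulk,
# reflecting the single-big-jump behaviour typical of such point processes.
```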
Jabbour, Richard J; Shun-Shin, Matthew J; Finegold, Judith A; Afzal Sohaib, S M; Cook, Christopher; Nijjer, Sukhjinder S; Whinnett, Zachary I; Manisty, Charlotte H; Brugada, Josep; Francis, Darrel P
2015-01-06
Biventricular pacing (CRT) shows clear benefits in heart failure with wide QRS, but results in narrow QRS have appeared conflicting. We tested the hypothesis that study design might have influenced findings. We identified all reports of CRT-P/D therapy in subjects with narrow QRS reporting effects on continuous physiological variables. Twelve studies (2074 patients) met these criteria. Studies were stratified by presence of bias-resistance steps: the presence of a randomized control arm over a single arm, and blinded outcome measurement. Change in each endpoint was quantified using a standardized effect size (Cohen's d). We conducted separate meta-analyses for each variable in turn, stratified by trial quality. In non-randomized, non-blinded studies, the majority of variables (10 of 12, 83%) showed significant improvement, ranging from a standardized mean effect size of +1.57 (95%CI +0.43 to +2.7) for ejection fraction to +2.87 (+1.78 to +3.95) for NYHA class. In the randomized, non-blinded study, only 3 out of 6 variables (50%) showed improvement. For the randomized blinded studies, 0 out of 9 variables (0%) showed benefit, ranging from -0.04 (-0.31 to +0.22) for ejection fraction to -0.1 (-0.73 to +0.53) for 6-minute walk test. Differences in degrees of resistance to bias, rather than choice of endpoint, explain the variation between studies of CRT in narrow-QRS heart failure addressing physiological variables. When bias-resistance features are implemented, it becomes clear that these patients do not improve in any tested physiological variable. Guidance from studies without careful planning to resist bias may be far less useful than commonly perceived. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Samet, Jeffrey H.; Raj, Anita; Cheng, Debbie M.; Blokhina, Elena; Bridden, Carly; Chaisson, Christine E.; Walley, Alexander Y.; Palfai, Tibor P.; Quinn, Emily K.; Zvartau, Edwin; Lioznov, Dmitry; Krupitsky, Evgeny
2014-01-01
Aims This study assessed the effectiveness of HERMITAGE (HIV’s Evolution in Russia - Mitigating Infection Transmission and Alcoholism in a Growing Epidemic), an adapted secondary HIV prevention intervention, compared with an attention control condition in decreasing sexually transmitted infections (STIs) and sex and drug risk behaviors among Russian HIV-infected heavy drinkers. Design We conducted a single-blinded, two-armed, randomized controlled trial with 12-month follow-up. Setting The study was conducted in St. Petersburg, Russia. Participants were recruited from four HIV and addiction clinical sites. The intervention was conducted at Botkin Infectious Disease Hospital. Participants HIV-infected persons with past 6-month risky sex and heavy alcohol consumption (n=700) were randomized to the HERMITAGE intervention (n=350) or an attention control condition (n=350). Intervention A Healthy Relationships Intervention stressing disclosure of HIV serostatus and condom use, adapted for a Russian clinical setting with two individual sessions and three small group sessions. Measurements The primary outcome was incident STI by laboratory test at 12-month follow-up. Secondary outcomes included change in unprotected sex and several alcohol and injection drug use (IDU) variables. Findings Participants had the following baseline characteristics: 59% male, mean age 30, 60% past year IDU, 15.4% prevalent STI and mean CD4 cell count 413/μl. Assessment occurred among 75% and 71% of participants at 6 and 12-months, respectively. STIs occurred in 20 subjects (8%) in the intervention group and 28 subjects (12%) in the control group at 12-month follow-up; logistic regression analyses found no significant difference between groups (adjusted odds ratio 0.69; 95% CI: 0.36-1.30; P=0.25). Both groups decreased unsafe behaviors, although no significant differences between groups were found. Conclusions The HERMITAGE HIV risk reduction intervention does not appear to reduce sexually transmitted infections and HIV risk behaviors in Russian HIV-infected heavy drinkers compared with attention controls. PMID:25170994
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and of great concern to road safety authorities, which justifies greater effort to reduce both their frequency and severity. To this end, it is necessary to first identify factors associated with crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study intends to identify the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was fitted by developing and comparing seven count-data models including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used, given that a head-on crash had occurred. With respect to crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness-of-fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were found to be positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to crash severity, the results of the REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of a median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures were proposed to minimize the risk of head-on crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Blanc-Bisson, C; Dechamps, A; Gouspillou, G; Dehail, P; Bourdel-Marchasson, I
2008-01-01
To evaluate the effects of early intensive physiotherapy during acute illness on post-hospitalization activities of daily living (ADL) autonomy. Prospective randomized controlled trial of intensive physiotherapy rehabilitation from day 1 to 2 after admission until clinical stability, versus usual care, in an acute care geriatric medicine ward. A total of 76 acutely ill patients were included, acutely bedridden or with reduced mobility but autonomous for mobility within the previous 3 months. Patients in palliative care or with a mobility-limiting pathology were excluded. Mean age was 85.4 (SD 6.6) years. At admission, at clinical stability and one month later: anthropometry, energy and protein intakes, hand grip strength, ADL scores, and baseline inflammatory parameters. An exploratory principal axis analysis was performed on the baseline characteristics, and general linear models were used to explore the course of ADL and nutritional variables. A 4-factor solution was found explaining 71.7% of variance, with a factor "nutrition"; a factor "function" (18.8% of variance) for ADL, handgrip strength, bedridden state, energy and protein intakes, serum albumin and C-reactive protein concentrations; a factor "strength"; and a fourth factor. During follow-up, dietary intakes, handgrip strength, and ADL scores improved, but no changes occurred for anthropometric variables. The intervention was associated only with an increase in protein intake. Better improvement in ADL was found in the intervention group when the model was adjusted on "function" factor items. Physical intervention programs should be proposed according to nutritional intakes, with the aim of preventing illness-induced disability.
Walitzer, Kimberly S.; Dermen, Kurt H.; Barrick, Christopher; Shyhalla, Kathleen
2015-01-01
Widespread adoption of empirically-supported treatment innovations has the potential to improve effectiveness of treatment received by individuals with substance use disorders. However, the process of disseminating such innovations has been complex, slow, and difficult. We empirically describe the dissemination and adoption of a treatment innovation – an alcohol-treatment preparatory therapeutic procedure based on motivational interviewing (MI) – in the context of Rogers’ (2003) five stages of innovation-decision process (knowledge, persuasion, decision, implementation and confirmation). To this end, 145 randomly-chosen outpatient addiction treatment clinics in New York State received an onsite visit from a project trainer delivering one of three randomly-assigned dissemination intensities: a 15-minute, a half-day or a full-day presentation. Across these clinics, 141 primary administrators and 837 clinicians completed questionnaires assessing aspects of five innovation-decision stages. At each clinic, questionnaire administration occurred immediately pre- and post-dissemination, as well as one and six months after dissemination. Consistent with Rogers’ theory, earlier stages of the innovation-decision process predicted later stages. As hypothesized, dissemination intensity predicted clinicians’ post-dissemination knowledge. Clinician baseline characteristics (including gender, pre-dissemination knowledge regarding the MI preparatory technique, education, case load, beliefs regarding the nature of alcohol problems, and beliefs and behavior with regard to therapeutic style) predicted knowledge and persuasion stage variables. One baseline clinic characteristic (i.e., clinic mean beliefs and behavior regarding an MI-consistent therapeutic style) predicted implementation stage variables. Findings suggest that dissemination strategies should accommodate clinician and clinic characteristics. PMID:25934460
Lin, Yi Hung; Tu, Yu Kang; Lu, Chun Tai; Chung, Wen Chen; Huang, Chiung Fang; Huang, Mao Suan; Lu, Hsein Kun
2014-01-01
Repigmentation variably occurs with different treatment methods in patients with gingival pigmentation. A systematic review was conducted of various treatment modalities for eliminating melanin pigmentation of the gingiva, comprising bur abrasion, scalpel surgery, cryosurgery, electrosurgery, gingival grafts, and laser techniques, to compare the recurrence rates (Rrs) of these treatment procedures. Electronic databases, including PubMed, Web of Science, Google, and Medline, were comprehensively searched, and manual searches were conducted for studies published from January 1951 to June 2013. After applying inclusion and exclusion criteria, the final list of articles was reviewed in depth to achieve the objectives of this review. A Poisson regression was used to analyze the outcome of depigmentation using the various treatment methods. The systematic review was based mainly on case reports. In total, 61 eligible publications met the defined criteria. The various therapeutic procedures showed variable clinical results with a wide range of Rrs. A random-effects Poisson regression showed that cryosurgery (Rr = 0.32%), electrosurgery (Rr = 0.74%), and laser depigmentation (Rr = 1.16%) yielded superior results, whereas bur abrasion yielded the highest Rr (8.89%). Within the limit of the sampling level, the present evidence-based results show that cryosurgery exhibits the optimal predictability for depigmentation of the gingiva among all procedures examined, followed by electrosurgery and laser techniques. It is possible to treat melanin pigmentation of the gingiva with various methods and prevent repigmentation. Among those treatment modalities, cryosurgery, electrosurgery, and laser surgery appear to be the best choices for treating gingival pigmentation. © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, A., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr; Song, J.; Hwang, H., E-mail: amitknp@postech.ac.kr, E-mail: amit.knp02@gmail.com, E-mail: hwanghs@postech.ac.kr
In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaOx-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.
Chertkov, Michael; Gabitov, Ildar
2004-03-02
The present invention provides methods and optical fibers for periodically pinning an actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.
Use of allele scores as instrumental variables for Mendelian randomization
Burgess, Stephen; Thompson, Simon G
2013-01-01
Background An allele score is a single variable summarizing multiple genetic variants associated with a risk factor. It is calculated as the total number of risk factor-increasing alleles for an individual (unweighted score), or the sum of weights for each allele corresponding to estimated genetic effect sizes (weighted score). An allele score can be used in a Mendelian randomization analysis to estimate the causal effect of the risk factor on an outcome. Methods Data were simulated to investigate the use of allele scores in Mendelian randomization where conventional instrumental variable techniques using multiple genetic variants demonstrate ‘weak instrument’ bias. The robustness of estimates using the allele score to misspecification (for example non-linearity, effect modification) and to violations of the instrumental variable assumptions was assessed. Results Causal estimates using a correctly specified allele score were unbiased with appropriate coverage levels. The estimates were generally robust to misspecification of the allele score, but not to instrumental variable violations, even if the majority of variants in the allele score were valid instruments. Using a weighted rather than an unweighted allele score increased power, but the increase was small when genetic variants had similar effect sizes. Naive use of the data under analysis to choose which variants to include in an allele score, or for deriving weights, resulted in substantial biases. Conclusions Allele scores enable valid causal estimates with large numbers of genetic variants. The stringency of criteria for genetic variants in Mendelian randomization should be maintained for all variants in an allele score. PMID:24062299
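A minimal simulated example of the unweighted allele score used as a single instrument is sketched below. The genotype frequencies, per-variant effects, and confounding structure are invented for illustration, and the instrumental-variable estimate is computed with the simple ratio estimator (two-stage least squares with one instrument).

```python
import numpy as np

rng = np.random.default_rng(6)

n, n_snps = 20_000, 30
maf = rng.uniform(0.1, 0.4, n_snps)
G = rng.binomial(2, maf, size=(n, n_snps))          # genotypes: 0/1/2 risk alleles
snp_effects = rng.uniform(0.01, 0.05, n_snps)       # small per-variant effects on exposure

U = rng.normal(size=n)                               # unmeasured confounder
X = G @ snp_effects + 0.5 * U + rng.normal(size=n)   # exposure (risk factor)
beta_true = 0.3
Y = beta_true * X + 0.7 * U + rng.normal(size=n)     # outcome

score = G.sum(axis=1)                                # unweighted allele score

def ratio_iv(z, x, y):
    """Two-stage least squares with a single instrument = ratio of covariances."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print("naive OLS slope :", round(np.cov(X, Y)[0, 1] / np.var(X, ddof=1), 3))
print("allele-score IV :", round(ratio_iv(score, X, Y), 3), "(true effect", beta_true, ")")
```

The confounded ordinary regression overestimates the effect, while the allele-score instrument recovers a value near the true causal effect, which is the behaviour the abstract describes for a correctly specified score.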
Pettigrew, Jonathan; Miller-Day, Michelle; Krieger, Janice L.; Zhou, Jiangxiu; Hecht, Michael L.
2014-01-01
Random assignment to groups is the foundation for scientifically rigorous clinical trials. But assignment is challenging in group randomized trials when only a few units (schools) are assigned to each condition. In the DRSR project, we assigned 39 rural Pennsylvania and Ohio schools to three conditions (rural, classic, control). But even with 13 schools per condition, achieving pretest equivalence on important variables is not guaranteed. We collected data on six important school-level variables: rurality, number of grades in the school, enrollment per grade, percent white, percent receiving free/assisted lunch, and test scores. Key to our procedure was the inclusion of school-level drug use data, available for a subset of the schools. Also, key was that we handled the partial data with modern missing data techniques. We chose to create one composite stratifying variable based on the seven school-level variables available. Principal components analysis with the seven variables yielded two factors, which were averaged to form the composite inflate-suppress (CIS) score which was the basis of stratification. The CIS score was broken into three strata within each state; schools were assigned at random to the three program conditions from within each stratum, within each state. Results showed that program group membership was unrelated to the CIS score, the two factors making up the CIS score, and the seven items making up the factors. Program group membership was not significantly related to pretest measures of drug use (alcohol, cigarettes, marijuana, chewing tobacco; smallest p>.15), thus verifying that pretest equivalence was achieved. PMID:23722619
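The general recipe described here can be sketched with placeholder data as follows: standardize the school-level covariates, take the first two principal components, average them into a composite score, cut the composite into three strata, and randomize schools to the three conditions within each stratum. The PCA details, state-level blocking, and handling of partial data in the original study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical school-level covariates (rows = schools, columns = variables)
n_schools, n_vars = 39, 7
X = rng.normal(size=(n_schools, n_vars))

# Principal components via SVD of the standardized data
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                  # first two component scores per school
composite = scores.mean(axis=1)        # analogue of the composite (CIS-style) score

# Three strata from the composite, then random assignment within each stratum
strata = np.digitize(composite, np.quantile(composite, [1/3, 2/3]))
conditions = np.empty(n_schools, dtype=object)
for stratum in range(3):
    idx = np.flatnonzero(strata == stratum)
    labels = np.resize(["rural", "classic", "control"], idx.size)
    conditions[idx] = rng.permutation(labels)

# Check that the composite is balanced across the three assigned conditions
for c in ("rural", "classic", "control"):
    print(c, "mean composite:", round(float(composite[conditions == c].mean()), 3))
```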
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
Evaluation of Lightning Jumps as a Predictor of Severe Weather in the Northeastern United States
NASA Astrophysics Data System (ADS)
Eck, Pamela
Severe weather events in the northeastern United States can be challenging to forecast, given how the evolution of deep convection can be influenced by complex terrain and the lack of quality observations in complex terrain. To supplement existing observations, this study explores using lightning to forecast severe convection in areas of complex terrain in the northeastern United States. A sudden increase in lightning flash rate by two standard deviations (2sigma), also known as a lightning jump, may be indicative of a strengthening updraft and an increased probability of severe weather. This study assesses the value of using lightning jumps to forecast severe weather during July 2015 in the northeastern United States. Total lightning data from the National Lightning Detection Network (NLDN) is used to calculate lightning jumps using a 2sigma lightning jump algorithm with a minimum threshold of 5 flashes min-1. Lightning jumps are used to predict the occurrence of severe weather, as given by whether a Storm Prediction Center (SPC) severe weather report occurred 45 min after a lightning jump in the same cell. Results indicate a high probability of detection (POD; 85%) and a high false alarm rate (FAR; 89%), suggesting that lightning jumps occur in sub-severe storms. The interaction between convection and complex terrain results in a locally enhanced updraft and an increased probability of severe weather. Thus, it is hypothesized that conditioning on an upslope variable may reduce the FAR. A random forest is introduced to objectively combine upslope flow, calculated using data from the High Resolution Rapid Refresh (HRRR), flash rate (FR), and flash rate changes with time (DFRDT). The random forest, a machine-learning algorithm, uses pattern recognition to predict a severe or non-severe classification based on the predictors. In addition to upslope flow, FR, and DFRDT, Next-Generation Radar (NEXRAD) Level III radar data was also included as a predictor to compare its value to that of lightning data. Results indicate a high POD (82%), a low FAR (28%), and that lightning data and upslope flow data account for 39% and 32% of variable importance, respectively.
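A minimal sketch of the 2-sigma jump idea on a per-cell flash-rate series, assuming a short trailing window for the sigma estimate; the thresholds follow the abstract (2 sigma, at least 5 flashes per minute), but the windowing details are simplifications rather than the operational algorithm.

```python
import numpy as np

def lightning_jumps(flash_rate, dt_min=2.0, min_rate=5.0, n_history=5):
    flash_rate = np.asarray(flash_rate, dtype=float)
    dfrdt = np.diff(flash_rate) / dt_min            # rate of change of flash rate
    jumps = []
    for i in range(n_history, len(dfrdt)):
        sigma = np.std(dfrdt[i - n_history:i])      # variability of the recent trend
        if sigma > 0 and dfrdt[i] > 2.0 * sigma and flash_rate[i + 1] >= min_rate:
            jumps.append(i + 1)                     # index into flash_rate
    return jumps

rates = [1, 2, 2, 3, 3, 4, 4, 12, 18, 20]           # toy per-cell flash-rate history
print(lightning_jumps(rates))
```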
Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.
Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric
2018-07-01
Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
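For intuition only, a generic two-stage (2SLS-style) instrumental-variable sketch on simulated data with an unmeasured confounder; the article's estimator targets a survival outcome under additivity assumptions and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.binomial(2, 0.3, n)                # instrument (e.g., allele count)
u = rng.normal(size=n)                     # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)       # exposure
y = 0.3 * x + u + rng.normal(size=n)       # outcome; true causal effect = 0.3

# Stage 1: predict the exposure from the instrument.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the predicted exposure.
X2 = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print("naive OLS slope:", np.polyfit(x, y, 1)[0])   # biased by confounding
print("2SLS estimate  :", beta[1])                  # close to 0.3
```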
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV > or = 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002
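A small simulation sketch of the real-time model described, with a random lot value at time zero and a random degradation rate per lot; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lots, times = 6, np.arange(0, 25, 3)            # months on stability
mu0, slope = 100.0, -0.8                          # population intercept and rate
cv_rate = 0.10                                    # lot-to-lot CV of the rate (>8%)

lot0 = rng.normal(mu0, 1.0, n_lots)               # variability at time zero
lot_rate = rng.normal(slope, abs(slope) * cv_rate, n_lots)   # rate variability
y = (lot0[:, None] + lot_rate[:, None] * times[None, :]
     + rng.normal(0, 0.5, (n_lots, len(times))))  # measurements with residual error

# per-lot least-squares degradation rates, to compare with the pooled estimate
rates = [np.polyfit(times, y[i], 1)[0] for i in range(n_lots)]
print("pooled rate :", round(np.polyfit(np.tile(times, n_lots), y.ravel(), 1)[0], 3))
print("per-lot rates:", np.round(rates, 3))
```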
HDMR methods to assess reliability in slope stability analyses
NASA Astrophysics Data System (ADS)
Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna
2014-05-01
Stability analyses of complex rock-soil deposits must account for the complex structure of discontinuities within the rock mass and embedded soil layers. These materials are characterized by high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses, two issues must be taken into account: (1) the uncertainties related to the structural setting of the rock-slope mass and (2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index of complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements inverse reliability analysis, meaning that the unknown design parameters are sought such that prescribed reliability index values are attained. This approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be applied efficiently when fewer than four random variables are considered; as the number of variables increases, the efficiency of reliability index estimation decreases because of the great amount of calculation required. The HDMR method is therefore used to improve computational accuracy. In this study, sliding mechanisms in the Polish Flysch Carpathians have been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains lie, is characterized by a rather complicated sedimentary pattern of flysch rocky-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness, (2) shale flysch, where shale beds are thicker than adjacent sandstone beds, and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types, so several configurations of possible unstable settings (within fractured rocky-soil masses) leading to sliding mechanisms have been investigated in this study. The reliability index values obtained from the HDMR method have been compared with conventional approaches such as neural networks; the efficiency of HDMR is shown for the case studied. References: Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.
Miller, Jennifer R B; Jhala, Yadvendradev V; Jena, Jyotirmay; Schmitz, Oswald J
2015-03-01
Innovative conservation tools are greatly needed to reduce livelihood losses and wildlife declines resulting from human-carnivore conflict. Spatial risk modeling is an emerging method for assessing the spatial patterns of predator-prey interactions, with applications for mitigating carnivore attacks on livestock. Large carnivores that ambush prey attack and kill over small areas, requiring models at fine spatial grains to predict livestock depredation hot spots. To detect the best resolution for predicting where carnivores access livestock, we examined the spatial attributes associated with livestock killed by tigers in Kanha Tiger Reserve, India, using risk models generated at 20, 100, and 200-m spatial grains. We analyzed land-use, human presence, and vegetation structure variables at 138 kill sites and 439 random sites to identify key landscape attributes where livestock were vulnerable to tigers. Land-use and human presence variables contributed strongly to predation risk models, with most variables showing high relative importance (≥0.85) at all spatial grains. The risk of a tiger killing livestock increased near dense forests and near the boundary of the park core zone where human presence is restricted. Risk was nonlinearly related to human infrastructure and open vegetation, with the greatest risk occurring 1.2 km from roads, 1.1 km from villages, and 8.0 km from scrubland. Kill sites were characterized by denser, patchier, and more complex vegetation with lower visibility than random sites. Risk maps revealed high-risk hot spots inside of the core zone boundary and in several patches in the human-dominated buffer zone. Validation against known kills revealed predictive accuracy for only the 20 m model, the resolution best representing the kill stage of hunting for large carnivores that ambush prey, like the tiger. Results demonstrate that risk models developed at fine spatial grains can offer accurate guidance on landscape attributes livestock should avoid to minimize human-carnivore conflict.
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
To model the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model for assessing resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.
Grant, Edward M.; Young, Deborah Rohm; Wu, Tong Tong
2015-01-01
We examined associations among longitudinal, multilevel variables and girls’ physical activity to determine the important predictors for physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate to vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed-effect and 19 random-effect variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school’s physical activity environment, affect adolescent girls’ moderate to vigorous physical activity longitudinally. PMID:25928064
Chan, Wai Sze; Williams, Jacob; Dautovich, Natalie D.; McNamara, Joseph P.H.; Stripling, Ashley; Dzierzewski, Joseph M.; Berry, Richard B.; McCoy, Karin J.M.; McCrae, Christina S.
2017-01-01
Study Objectives: Sleep variability is a clinically significant variable in understanding and treating insomnia in older adults. The current study examined changes in sleep variability in the course of brief behavioral therapy for insomnia (BBT-I) in older adults who had chronic insomnia. Additionally, the current study examined the mediating mechanisms underlying reductions of sleep variability and the moderating effects of baseline sleep variability on treatment responsiveness. Methods: Sixty-two elderly participants were randomly assigned to either BBT-I or self-monitoring and attention control (SMAC). Sleep was assessed by sleep diaries and actigraphy from baseline to posttreatment and at 3-month follow-up. Mixed models were used to examine changes in sleep variability (within-person standard deviations of weekly sleep parameters) and the hypothesized mediation and moderation effects. Results: Variabilities in sleep diary-assessed sleep onset latency (SOL) and actigraphy-assessed total sleep time (TST) significantly decreased in BBT-I compared to SMAC (Pseudo R2 = .12, .27; P = .018, .008). These effects were mediated by reductions in bedtime and wake time variability and time in bed. Significant time × group × baseline sleep variability interactions on sleep outcomes indicated that participants who had higher baseline sleep variability were more responsive to BBT-I; their actigraphy-assessed TST, SOL, and sleep efficiency improved to a greater degree (Pseudo R2 = .15 to .66; P < .001 to .044). Conclusions: BBT-I is effective in reducing sleep variability in older adults who have chronic insomnia. Increased consistency in bedtime and wake time and decreased time in bed mediate reductions of sleep variability. Baseline sleep variability may serve as a marker of high treatment responsiveness to BBT-I. Clinical Trial Registration: ClinicalTrials.gov, Identifier: NCT02967185 Citation: Chan WS, Williams J, Dautovich ND, McNamara JP, Stripling A, Dzierzewski JM, Berry RB, McCoy KJ, McCrae CS. Night-to-night sleep variability in older adults with chronic insomnia: mediators and moderators in a randomized controlled trial of brief behavioral therapy (BBT-I). J Clin Sleep Med. 2017;13(11):1243–1254. PMID:28992829
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
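A sketch of the transformation idea for the multivariate normal case: independent standard normals are mapped through a Cholesky factor to obtain the desired mean and covariance; the specific mean vector and covariance matrix are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
L = np.linalg.cholesky(Sigma)              # L @ L.T == Sigma

z = rng.standard_normal((100_000, 3))      # independent normal components
x = mu + z @ L.T                           # correlated multivariate normal draws

print(np.round(x.mean(axis=0), 2))              # ~ mu
print(np.round(np.cov(x, rowvar=False), 2))     # ~ Sigma
```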
Causal Inference and Omitted Variable Bias in Financial Aid Research: Assessing Solutions
ERIC Educational Resources Information Center
Riegg, Stephanie K.
2008-01-01
This article highlights the problem of omitted variable bias in research on the causal effect of financial aid on college-going. I first describe the problem of self-selection and the resulting bias from omitted variables. I then assess and explore the strengths and weaknesses of random assignment, multivariate regression, proxy variables, fixed…
A Dynamic Bayesian Network Model for the Production and Inventory Control
NASA Astrophysics Data System (ADS)
Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol
In general, production quantities and deliveries vary randomly, and the total stock therefore also varies randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian Network is a probabilistic model that represents the qualitative dependence between two or more random variables by a graph structure and the quantitative relations between individual variables by conditional probabilities. The probability distribution of the total stock is calculated through the propagation of probabilities on the network. Moreover, an adjusting rule for the production quantities that holds the probabilities of the total stock reaching a lower limit and a ceiling at certain values is presented.
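A minimal Monte Carlo sketch of the underlying idea, propagating the distribution of total stock under random production and deliveries and nudging the production target when breach probabilities grow; the distributions, limits, and adjustment rule are illustrative assumptions, not the paper's Bayesian-network implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, horizon = 20_000, 10
stock = np.full(n_paths, 50.0)
prod_mean, lower, upper = 20.0, 20.0, 90.0

for t in range(horizon):
    production = rng.poisson(prod_mean, n_paths)     # random production quantity
    demand = rng.poisson(18.0, n_paths)              # random delivered goods
    stock = stock + production - demand
    p_low = np.mean(stock < lower)
    p_high = np.mean(stock > upper)
    # simple adjusting rule: move the production target when either breach
    # probability exceeds a chosen level (here 10%)
    if p_low > 0.10:
        prod_mean += 1.0
    elif p_high > 0.10:
        prod_mean -= 1.0
    print(f"t={t+1:2d}  P(stock<{lower:.0f})={p_low:.2f}  "
          f"P(stock>{upper:.0f})={p_high:.2f}  target={prod_mean:.0f}")
```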
Majid, Omer Waleed; Al-Mashhadani, Bashar Adil
2014-06-01
The purpose of the present study was to compare the effect of oral bromelain (4 × 250 mg) versus oral diclofenac sodium (4 × 25 mg) on pain, swelling, trismus, and quality of life (QOL) after surgical removal of impacted lower third molars. A randomized, double-blind, placebo-controlled study was planned. The sample included patients requiring extraction under local anesthesia of a single partial bony impacted mandibular third molar. The patients were randomly distributed into 1 of 3 groups: bromelain, diclofenac, and placebo. Treatment started 1 day before surgery and continued for 4 days. The predictor variable was the type of the drug given to the patients. The outcome variables were pain, swelling, and trismus, which were measured at 1, 3, and 7 days postoperatively. The other study variables included QOL measures to assess the patients' perception regarding the effect of surgery on their well-being and daily activities. A validated questionnaire was used to measure QOL. The data were analyzed using analysis of variance, multiple measures analysis of variance, or Pearson's χ(2) test, as appropriate. P < .05 was considered significant. A total of 45 subjects requiring surgical removal of a single impacted mandibular third molar under local anesthesia were included in the present study. The bromelain and diclofenac groups both showed a significant reduction in pain compared with the placebo group at all intervals (P < .05). Diclofenac also resulted in a significant reduction of swelling at 3 and 7 days, and bromelain resulted in an insignificant reduction. A nonsignificant reduction in trismus occurred in both treatment groups compared with the placebo group. Both treatment groups also showed a significant difference in the effect on QOL in most subscales and total scores (P < .05). The effect was comparable between the 2 treatment groups for all parameters and at all intervals. The results of our study have shown that oral bromelain is an effective therapy to improve the QOL after surgical removal of impacted lower third molars, with an effect on the postoperative sequelae comparable to that of preemptive diclofenac sodium. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Reliable gain-scheduled control of discrete-time systems and its application to CSTR model
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.
2016-10-01
This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.
Low-contrast lesion detection in tomosynthetic breast imaging using a realistic breast phantom
NASA Astrophysics Data System (ADS)
Zhou, Lili; Oldan, Jorge; Fisher, Paul; Gindi, Gene
2006-03-01
Tomosynthesis mammography is a potentially valuable technique for detection of breast cancer. In this simulation study, we investigate the efficacy of three different tomographic reconstruction methods, EM, SART and Backprojection, in the context of an especially difficult mammographic detection task. The task is the detection of a very low-contrast mass embedded in very dense fibro-glandular tissue - a clinically useful task for which tomosynthesis may be well suited. The project uses an anatomically realistic 3D digital breast phantom whose normal anatomic variability limits lesion conspicuity. In order to capture anatomical object variability, we generate an ensemble of phantoms, each of which comprises random instances of various breast structures. We construct medium-sized 3D breast phantoms which model random instances of ductal structures, fibrous connective tissue, Cooper's ligaments and power law structural noise for small scale object variability. Random instances of 7-8 mm irregular masses are generated by a 3D random walk algorithm and placed in very dense fibro-glandular tissue. Several other components of the breast phantom are held fixed, i.e. not randomly generated. These include the fixed breast shape and size, nipple structure, fixed lesion location, and a pectoralis muscle. We collect low-dose data using an isocentric tomosynthetic geometry at 11 angles over 50 degrees and add Poisson noise. The data are reconstructed using the three algorithms. Reconstructed slices through the center of the lesion are presented to human observers in a 2AFC (two-alternative forced-choice) test that measures detectability by computing AUC (area under the ROC curve). The data collected in each simulation include two sources of variability: that due to the anatomical variability of the phantom and that due to the Poisson data noise. We found that, for this difficult task, the AUC value for EM (0.89) was greater than that for SART (0.83) and Backprojection (0.66).
Tellez, Jason A; Schmidt, Jason D
2011-08-20
The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
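A sketch of the single-beam and summed-beam cases, drawing gamma-gamma irradiance as the product of two unit-mean gamma variables and tracking the scintillation index as the number of beams grows; the alpha and beta values are illustrative, and the partially correlated case treated in the paper is not modeled.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha, beta, n_samp = 4.0, 2.0, 200_000

def gamma_gamma(n):
    x = rng.gamma(alpha, 1.0 / alpha, n)     # large-scale fluctuations
    y = rng.gamma(beta, 1.0 / beta, n)       # small-scale fluctuations
    return x * y                             # unit-mean irradiance samples

for n_beams in (1, 2, 4, 8):
    total = sum(gamma_gamma(n_samp) for _ in range(n_beams)) / n_beams
    si = total.var() / total.mean() ** 2     # scintillation index
    print(f"{n_beams} beam(s): scintillation index = {si:.3f}")
```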
Circular Data Images for Directional Data
NASA Technical Reports Server (NTRS)
Morpet, William J.
2004-01-01
Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
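A small sketch of the cross-over problem and the circular fix, using matplotlib's built-in cyclic 'hsv' colormap as a stand-in for the circular color scale described.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
angle = np.degrees(np.arctan2(y, x))        # directional field, -180..+180 degrees

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.imshow(angle, cmap="viridis")           # linear scale: discontinuity at +/-180
ax1.set_title("linear color scale (cross-over)")
ax2.imshow(angle, cmap="hsv")               # cyclic scale: colors repeat periodically
ax2.set_title("circular color scale")
plt.show()
```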
NASA Astrophysics Data System (ADS)
Popov, Pavel; Sideris, Athanasios; Sirignano, William
2014-11-01
We examine the non-linear dynamics of the transverse modes of combustion-driven acoustic instability in a liquid-propellant rocket engine. Triggering can occur, whereby small perturbations from mean conditions decay, while larger disturbances grow to a limit-cycle of amplitude that may compare to the mean pressure. For a deterministic perturbation, the system is also deterministic, computed by coupled finite-volume solvers at low computational cost for a single realization. The randomness of the triggering disturbance is captured by treating the injector flow rates, local pressure disturbances, and sudden acceleration of the entire combustion chamber as random variables. The combustor chamber with its many sub-fields resulting from many injector ports may be viewed as a multi-scale complex system wherein the developing acoustic oscillation is the emergent structure. Numerical simulation of the resulting stochastic PDE system is performed using the polynomial chaos expansion method. The overall probability of unstable growth is assessed in different regions of the parameter space. We address, in particular, the seven-injector, rectangular Purdue University experimental combustion chamber. In addition to the novel geometry, new features include disturbances caused by engine acceleration and unsteady thruster nozzle flow.
Phase Transition Behavior in a Neutral Evolution Model
NASA Astrophysics Data System (ADS)
King, Dawn; Scott, Adam; Maric, Nevena; Bahar, Sonya
2014-03-01
The complexity of interactions among individuals and between individuals and the environment make agent based modeling ideal for studying emergent speciation. This is a dynamically complex problem that can be characterized via the critical behavior of a continuous phase transition. Concomitant with the main tenets of natural selection, we allow organisms to reproduce, mutate, and die within a neutral phenotype space. Previous work has shown phase transition behavior in an assortative mating model with variable fitness landscapes as the maximum mutation size (μ) was varied (Dees and Bahar, 2010). Similarly, this behavior was recently presented in the work of Scott et al. (2013), even on a completely neutral landscape, for bacterial-like fission as well as for assortative mating. Here we present another neutral model to investigate the `critical' phase transition behavior of three mating types - assortative, bacterial, and random - in a phenotype space as a function of the percentage of random death. Results show two types of phase transitions occurring for the parameters of the population size and the number of clusters (an analogue of species), indicating different evolutionary dynamics for system survival and clustering. This research was supported by funding from: University of Missouri Research Board and James S. McDonnell Foundation.
Schilling, Jim F; Murphy, Jeff C; Bonney, John R; Thich, Jacob L
2013-07-01
Core training continues to be emphasized with the proposed intent of improving athletic performance. The purpose of this investigation was to discover if core isometric endurance exercises were superior to core isotonic strengthening exercises and if either influenced specific endurance, strength, and performance measures. Ten untrained students were randomly assigned to core isometric endurance (n = 5) and core isotonic strength training (n = 5). Each performed three exercises, two times per week for six weeks. A repeated measures ANOVA was used to compare the measurements for the dependent variables, with significance assessed by Bonferroni post hoc testing. The training protocols were compared using a 2 × 3 mixed model ANOVA. Improvement in trunk flexor and extensor endurance (p < 0.05) along with squat and bench press strength (p < 0.05) occurred with the strength group. Improvement in trunk flexor and right lateral endurance (p < 0.05) along with strength in the squat (p < 0.05) were found with the endurance group. Neither training protocol proved superior and both were ineffective in improving performance. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu
Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.
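For contrast with the non-Markov case discussed, a minimal Gillespie-type stochastic simulation sketch of a birth-death system whose density evolves under the standard constant-propensity chemical master equation; the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
k_birth, k_death = 5.0, 0.1      # production rate, per-molecule decay rate
t, t_end, n = 0.0, 100.0, 0
times, counts = [0.0], [0]

while t < t_end:
    a = np.array([k_birth, k_death * n])   # constant-form reaction propensities
    a_total = a.sum()
    t += rng.exponential(1.0 / a_total)    # exponential waiting time to next event
    if rng.random() < a[0] / a_total:      # choose which reaction fires
        n += 1
    else:
        n -= 1
    times.append(t)
    counts.append(n)

print("mean copy number over trajectory:", round(np.mean(counts), 1))  # ~ k_birth/k_death
```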
Arbitrary-step randomly delayed robust filter with application to boost phase tracking
NASA Astrophysics Data System (ADS)
Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang
2018-04-01
The conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions are often invalid. To solve this problem, a novel algorithm is proposed by taking the following four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum number of delays must be one or two and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified to the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. Therefore, the proposed filter is robust not only to randomly delayed measurements but also to glint noise. The application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
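A sketch of the randomly delayed measurement model only (not the cubature or Huber filtering steps): each received measurement is the true one delayed by 0, 1, or 2 steps, selected by Bernoulli random variables with unequal probabilities; the trajectory and probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps = 20
truth = np.cumsum(rng.normal(0, 1, n_steps))           # toy state trajectory
z_clean = truth + rng.normal(0, 0.2, n_steps)          # ideal on-time measurements

p1, p2 = 0.3, 0.2                 # P(delay >= 1) and P(delay >= 2 | delay >= 1)
received = np.empty(n_steps)
for k in range(n_steps):
    b1 = rng.random() < p1        # first Bernoulli variable
    b2 = rng.random() < p2        # second Bernoulli variable
    delay = 0 if not b1 else (1 if not b2 else 2)
    delay = min(delay, k)         # cannot reach before the first step
    received[k] = z_clean[k - delay]

print(np.column_stack([z_clean, received])[:8].round(2))
```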
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
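A sketch of the Gauss-Hermite step in isolation, approximating a marginal mean by integrating a conditional mean over a normal random effect; the conditional model here is a toy logistic function, not the Tobit likelihood from the paper.

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)   # physicists' Hermite rule
sigma = 0.8                                            # SD of the random effect b

def conditional_mean(x, b):
    # toy conditional model: logistic mean with a random intercept b
    return 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x + b)))

def marginal_mean(x):
    # E_b[g(x, b)] with b ~ N(0, sigma^2), via the substitution b = sqrt(2)*sigma*t
    b = np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * conditional_mean(x, b)) / np.sqrt(np.pi)

print("marginal mean at x=0   :", round(marginal_mean(0.0), 4))
print("conditional mean at b=0:", round(conditional_mean(0.0, 0.0), 4))
```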
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were derived. Their one-dimensional analogues are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
Wavevector-Frequency Analysis with Applications to Acoustics
1994-01-01
"Turbulent Boundary Layer Pressure Measured by Microphone Arrays," Journal of the Acoustical Society of America, vol. 49, no. 3, March 1971, pp. 862-877. Applications of Green's Functions in Science and Engineering, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1971. Ffowcs-Williams et al., Modern Methods for... variables of a random process are called joint moments. The m,n-th joint moment of the random variables v and w is defined by E[v^m w^n] = ∬ a^m b^n f_vw(a, b) da db, where f_vw(a, b) is the joint probability density of v and w.
1992-12-01
suspect that, to the extent prediction bias was positively correlated among the various models, the random walk, learning curve, fixed-variable and Bemis... Functions, Production Rate Adjustment Model, Learning Curve Model, Random Walk Model, Bemis Model, Evaluating Model Bias, Cost Prediction Bias, Cost... of four cost progress models -- a random walk model, the traditional learning curve model, a production rate model (fixed-variable model), and a model
2012-01-01
Background In order to elucidate the complex relationship between co-occurring depression and anxiety with cardiac autonomic function in the elderly, this study examined the correlation between cardiac vagal control (CVC) and pre-defined, theoretical factors from the Hospital Anxiety and Depression Scale (HADS). Methods Three hundred fifty-four randomly selected Chinese male subjects aged ≥65 years and living in the community were enrolled. CVC was measured using a frequency-domain index of heart rate variability. Results Confirmatory factor analysis showed that the flat tripartite model of HADS provided a modest advantage in model fit when compared with other theoretical factor solutions. In the flat tripartite model, there was a significant negative association between anhedonic depression and CVC. In contrast, autonomic anxiety showed a significant positive correlation with CVC. In the hierarchical tripartite model, negative affectivity was not directly associated with CVC; instead, it had positive and negative indirect effects on CVC via autonomic anxiety and anhedonic depression, respectively. As scores for negative affectivity increased, these specific indirect effects diminished. Conclusions Among competing models of co-occurring depression and anxiety, constructs from tripartite models demonstrate fair conformity with the data but unique and distinct correlations with CVC. Negative affectivity may determine the relationship of anhedonic depression and autonomic anxiety with CVC. Separating affective symptoms under the constructs of the tripartite models helps disentangle complex associations between co-occurring depression and anxiety with CVC. PMID:22846457
Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G
2015-01-01
Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. PMID:25150977
Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F
2011-05-20
Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
Applications of Geostatistics in Plant Nematology
Wallace, M. K.; Hawkins, D. M.
1994-01-01
The application of geostatistics to plant nematology was demonstrated by evaluating soil and nematode data acquired from 200 soil samples collected from the Ap horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities. PMID:19279938
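A minimal empirical semivariogram sketch for irregularly spaced samples; the coordinates and counts are simulated, not the Minnesota data, and no model fitting or kriging is shown.

```python
import numpy as np

rng = np.random.default_rng(9)
xy = rng.uniform(0, 100, (200, 2))                  # sample locations (m)
values = rng.poisson(30, 200).astype(float)         # e.g., nematode counts per sample

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
g = 0.5 * (values[:, None] - values[None, :]) ** 2             # pairwise semivariances

bins = np.arange(0, 60, 10)
iu = np.triu_indices(len(values), k=1)               # each pair counted once
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d[iu] >= lo) & (d[iu] < hi)
    print(f"lag {lo:2.0f}-{hi:2.0f} m: gamma(h) = {g[iu][mask].mean():6.2f}"
          f"  ({mask.sum()} pairs)")
```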
Mathematical and physical meaning of the Bell inequalities
NASA Astrophysics Data System (ADS)
Santos, Emilio
2016-09-01
It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0,1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
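A small numerical check of the distance-function view: with d(a, b) = P(a != b), any joint (noncontextual) distribution over three binary variables must satisfy the triangle inequality, which the random search below never violates.

```python
import numpy as np

rng = np.random.default_rng(13)
outcomes = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def dist(p, i, j):
    # d(x_i, x_j) = P(x_i != x_j) under the joint distribution p over the 8 outcomes
    return sum(pk for pk, o in zip(p, outcomes) if o[i] != o[j])

for _ in range(10_000):
    p = rng.dirichlet(np.ones(8))            # a random joint (noncontextual) model
    d12, d23, d13 = dist(p, 0, 1), dist(p, 1, 2), dist(p, 0, 2)
    assert d13 <= d12 + d23 + 1e-12          # triangle inequality always holds

print("triangle inequality held for all 10,000 random joint distributions")
```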
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
PATTERN PREDICTION OF ACADEMIC SUCCESS.
ERIC Educational Resources Information Center
LUNNEBORG, CLIFFORD E.; LUNNEBORG, PATRICIA W.
A technique of pattern analysis which emphasizes the development of more effective ways of scoring a given set of variables was formulated. To the original variables were successively added two-, three-, and four-variable patterns and the increase in predictive efficiency assessed. Randomly selected high school seniors who had participated in the…
Measurement variability error for estimates of volume change
James A. Westfall; Paul L. Patterson
2007-01-01
Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
ERIC Educational Resources Information Center
Reardon, Sean F.; Unlu, Faith; Zhu, Pei; Bloom, Howard
2013-01-01
We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the instrumental variables literature as the…
Ireys, H T; Gross, S S; Werthamer-Larsson, L A; Kolodner, K B
1994-12-01
The relationships between selected condition characteristics and self-esteem were investigated in a randomly drawn, community-based sample of 286 young adults with chronic illnesses and disabilities. Whether appraisals of the impact of the condition mediated relationships between condition characteristics and self-esteem, as measured by the Rosenberg Self-Esteem Scale, was also examined. As a group, the youth in this sample reported positive self-esteem. When sociodemographic and condition-related variables were considered simultaneously, maternal education, unpredictability of symptoms, prognosis, sensory impairment, and the presence of a co-occurring learning disability were found to have direct effects on esteem. Perceived impact mediated the relationship between condition characteristics and self-esteem. The results are discussed in relation to the role of impact appraisal in determining the emotional well-being of young adults with chronic illnesses.
Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence
NASA Technical Reports Server (NTRS)
Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor
2010-01-01
We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series and the relevant statistical theory predicts that Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then applies variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, the study applies five imputation methods to handle the missing values rather than deleting the affected records outright. Second, key variables are identified via factor analysis and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, applied with variable selection as well as with the full set of variables, has better forecasting performance than the listed benchmark models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
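A minimal sketch of the final modeling stage only: a Random Forest fitted to lagged level and weather features to predict the next day's level; the data, features, and lags are invented, and the imputation and factor-analysis steps from the paper are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(17)
n_days = 600
rain = rng.gamma(2.0, 5.0, n_days)                                 # daily rainfall (mm)
level = 240 + np.cumsum(0.05 * rain - 0.25) + rng.normal(0, 0.5, n_days)

# features: yesterday's level plus the last two days of rainfall
X = np.column_stack([level[1:-1], rain[1:-1], rain[:-2]])
y = level[2:]                                                      # next-day water level

split = 500
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"hold-out RMSE: {rmse:.2f} m")
```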
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected for in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individual moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correction data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random error was derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error during their treatment greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and the random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
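The reported margins are consistent with the widely used van Herk recipe M = 2.5*Sigma + 0.7*sigma for systematic and random errors; that formula is an assumption here (the abstract does not state it), and the short check below simply recomputes margins from the quoted errors.

```python
# assumed margin recipe: M = 2.5 * systematic + 0.7 * random (van Herk)
systematic = {"lateral": 0.14, "caudocranial": 0.10, "anteroposterior": 0.14}  # cm
random_err = {"lateral": 0.27, "caudocranial": 0.22, "anteroposterior": 0.23}  # cm

for axis in systematic:
    margin = 2.5 * systematic[axis] + 0.7 * random_err[axis]
    print(f"{axis:15s} PTV margin = {margin:.2f} cm")
# gives ~0.54, 0.40 and 0.51 cm, agreeing with the quoted 0.55/0.41/0.50 cm to
# within rounding of the underlying error estimates.
```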
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M
2017-12-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent to a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate the potential benefit of breast cancer screening on subsequent breast cancer mortality based on the HIP study. © 2017, The International Biometric Society.
Condensation with two constraints and disorder
NASA Astrophysics Data System (ADS)
Barré, J.; Mangeolle, L.
2018-04-01
We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a ‘condensation’ transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations in certain nonlinear chains, called breathers. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of ‘intermediate symmetry breaking’: the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.
An Undergraduate Research Experience on Studying Variable Stars
NASA Astrophysics Data System (ADS)
Amaral, A.; Percy, J. R.
2016-06-01
We describe and evaluate a summer undergraduate research project and experience by one of us (AA), under the supervision of the other (JP). The aim of the project was to sample current approaches to analyzing variable star data, and topics related to the study of Mira variable stars and their astrophysical importance. This project was done through the Summer Undergraduate Research Program (SURP) in astronomy at the University of Toronto. SURP allowed undergraduate students to explore and learn about many topics within astronomy and astrophysics, from instrumentation to cosmology. SURP introduced students to key skills which are essential for students hoping to pursue graduate studies in any scientific field. Variable stars proved to be an excellent topic for a research project. For beginners to independent research, it introduces key concepts in research such as critical thinking and problem solving, while illuminating previously learned topics in stellar physics. The focus of this summer project was to compare observations with structural and evolutionary models, including modelling the random walk behavior exhibited in the (O-C) diagrams of most Mira stars. We found that the random walk could be modelled by using random fluctuations of the period. This explanation agreed well with observations.
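A minimal sketch of the mechanism described above, in which cycle-to-cycle random fluctuations of the period accumulate into random-walk behaviour in the (O-C) diagram; the nominal period and fluctuation amplitude are placeholder values, not those of any particular Mira star:

```python
# Minimal sketch: random cycle-to-cycle period fluctuations produce
# random-walk-like (O-C) behaviour. The nominal period and fluctuation
# amplitude below are placeholder values, not the project's.
import numpy as np

rng = np.random.default_rng(0)

P0 = 300.0          # nominal period, days (placeholder)
sigma_P = 1.0       # std. dev. of per-cycle period fluctuation, days (placeholder)
n_cycles = 500

periods = P0 + rng.normal(0.0, sigma_P, n_cycles)  # each cycle gets a perturbed period
observed_times = np.cumsum(periods)                # observed times of maximum
computed_times = P0 * np.arange(1, n_cycles + 1)   # ephemeris with a constant period

o_minus_c = observed_times - computed_times        # accumulates like a random walk
print(o_minus_c[:5], o_minus_c[-1])
```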
NASA Astrophysics Data System (ADS)
Goudarzi, Nasser
2016-04-01
In this work, two new and powerful chemometrics methods are applied to the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct the models used to predict the 19F chemical shifts. No separate variable selection method was used in this study, since the RF method can serve as both a variable selection and a modeling technique. The effects of important parameters governing the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set of the RBF-PLS and RF models were 44.70, 23.86, 29.77 and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results obtained reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
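A minimal sketch of an RF-based QSPR model of the kind described above, assuming scikit-learn; the descriptor matrix and response values are synthetic placeholders, not the 19F data set used in the study, and the settings for the number of trees and variables per split are illustrative only:

```python
# Minimal sketch of an RF-based QSPR model, assuming scikit-learn.
# The descriptor matrix and shift values are synthetic placeholders,
# not the fluorine-19 data set used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))                                     # placeholder molecular descriptors
y = X[:, 0] * 10 + X[:, 1] * 5 + rng.normal(scale=2.0, size=100)   # placeholder "chemical shifts"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# nt ~ n_estimators, m ~ max_features (variables tried at each split)
model = RandomForestRegressor(n_estimators=500, max_features=5, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmsep = np.sqrt(np.mean((pred - y_test) ** 2))
r = np.corrcoef(pred, y_test)[0, 1]
print(f"RMSEP={rmsep:.2f}  r={r:.3f}")

# Feature importances act as an implicit variable-selection step.
print(np.argsort(model.feature_importances_)[::-1][:5])
```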
Rupture Propagation for Stochastic Fault Models
NASA Astrophysics Data System (ADS)
Favreau, P.; Lavallee, D.; Archuleta, R.
2003-12-01
The inversion of strong-motion data from large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and this distribution can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate when constrained by the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based respectively on Cauchy and Gaussian random variables. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. A slip-dependent friction law is used. The simulations show that the propagation of the rupture front is more complex, incoherent or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared with the smoother rupture of the Gauss scenario.
Modeling Achievement Trajectories when Attrition Is Informative
ERIC Educational Resources Information Center
Feldman, Betsy J.; Rabe-Hesketh, Sophia
2012-01-01
In longitudinal education studies, assuming that dropout and missing data occur completely at random is often unrealistic. When the probability of dropout depends on covariates and observed responses (called "missing at random" [MAR]), or on values of responses that are missing (called "informative" or "not missing at random" [NMAR]),…
Two approximations of the present value distribution of a disability annuity
NASA Astrophysics Data System (ADS)
Spreeuw, Jaap
2006-02-01
The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
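A minimal simulation sketch of the claim above about the main-terms Poisson working model, assuming statsmodels; the data-generating process is an illustrative assumption, not taken from the paper, and the sketch only checks the asymptotic-unbiasedness claim numerically:

```python
# Minimal simulation sketch of the claim: in a randomized trial the treatment
# coefficient from a main-terms Poisson working model targets the marginal
# log rate ratio even if the working model is misspecified.
# Uses statsmodels; the data-generating process below is an assumption for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20000
w = rng.normal(size=n)                           # baseline covariate
a = rng.integers(0, 2, size=n)                   # randomized treatment indicator
rate = np.exp(0.2 + 0.5 * a + 0.8 * np.sin(w))   # true model is not main-terms in w
y = rng.poisson(rate)

# Misspecified main-terms Poisson working model: y ~ const + a + w
X = sm.add_constant(np.column_stack([a, w]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

marginal_log_rr = np.log(y[a == 1].mean() / y[a == 0].mean())
print(fit.params[1], marginal_log_rr)  # the two estimates should be close
```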
Dissociable effects of practice variability on learning motor and timing skills.
Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline
2018-01-01
Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a dissociable effect of practice variability on learning complex skills that involve both motor and timing constraints.
Soil variability in engineering applications
NASA Astrophysics Data System (ADS)
Vessia, Giovanna
2014-05-01
Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. In contrast, the variability is inherent to the formation and evolution processes experienced by each geological unit (a homogeneous geomaterial on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle and the cohesion, among others. These spatial variations must be handled by engineering models to achieve reliable design of structures and infrastructure. Matheron (1962) introduced geostatistics as the most comprehensive tool for managing the spatial correlation of parameter measurements used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to fractal theory. In the same years, Vanmarcke (1983) proposed random field theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as a single material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Later studies collected worldwide variability values for many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the measured spatial variability of soils and rocks worldwide, together with other uncertainties introduced by testing devices and engineering models, heavily affects the reliability of geotechnical design. So far, several methods have been proposed to deal with these sources of uncertainty in engineering design models (e.g. the First Order Reliability Method, the Second Order Reliability Method, the Response Surface Method and High Dimensional Model Representation). Nowadays, efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take spatial variability into account as an additional physical variable. References Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. 
Seepage beneath water retaining structures founded on spatially random soil, Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W H Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
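A minimal sketch of the random field description referred to above (a correlation function plus a scale of fluctuation), assuming an exponential correlation model and placeholder soil parameters; this is an illustration, not a reproduction of any of the cited studies:

```python
# Minimal sketch: 1D stationary Gaussian random field for a soil property,
# assuming an exponential correlation function with scale of fluctuation theta.
# Mean, COV and theta below are placeholders, not values from the cited studies.
import numpy as np

rng = np.random.default_rng(3)

depth = np.linspace(0.0, 20.0, 201)   # m
mean_su = 50.0                        # mean undrained shear strength, kPa (placeholder)
cov_su = 0.3                          # coefficient of variation (placeholder)
theta = 2.0                           # scale of fluctuation, m (placeholder)

# Exponential (Markov) correlation: rho(tau) = exp(-2*|tau|/theta)
tau = np.abs(depth[:, None] - depth[None, :])
corr = np.exp(-2.0 * tau / theta)

# Sample one realization via Cholesky factorization of the correlation matrix
L = np.linalg.cholesky(corr + 1e-10 * np.eye(len(depth)))
field = mean_su + (cov_su * mean_su) * (L @ rng.standard_normal(len(depth)))
print(field.min(), field.mean(), field.max())
```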
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined, for one measurement trial, from the CDCOM scorings of a randomly selected group of eight images using previously reported post-analysis methods. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
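A minimal sketch of the resampling experiment described above; actual CDCOM scoring and the TVT post-analysis are replaced by a placeholder Bernoulli model for a single disk cell, so the sketch only illustrates how binomial statistics and the fixed pool of 36 images drive the trial-to-trial variability:

```python
# Minimal sketch of the resampling experiment: repeatedly draw 8 of 36 images
# and recompute a detection fraction per cell. Real CDCOM scoring and the TVT
# post-analysis are replaced here by a placeholder Bernoulli model.
import numpy as np

rng = np.random.default_rng(4)

n_images, n_draw, n_trials = 36, 8, 3000
p_detect = 0.625                     # placeholder per-image detection probability for one disk

# Simulated per-image scoring of a single disk cell (1 = detected, 0 = missed)
scores = rng.binomial(1, p_detect, size=n_images)

fractions = np.empty(n_trials)
for t in range(n_trials):
    subset = rng.choice(n_images, size=n_draw, replace=False)
    fractions[t] = scores[subset].mean()    # detection fraction from 8 images

print(f"mean={fractions.mean():.3f}  sd={fractions.std(ddof=1):.3f}")
# The spread reflects binomial counting statistics; because every trial
# re-samples the same fixed 36 scores, it understates the variability across
# fresh image sets, consistent with the abstract's caution about re-sampling.
```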
Variability of the Baltic Sea level and floods in the Gulf of Finland
NASA Astrophysics Data System (ADS)
Kulikov, E. A.; Medvedev, I. P.
2013-03-01
The statistical analysis of the long-term data on the variability of the Baltic Sea level has revealed the complicated character of the wave field structure. The wave field formed by the variable winds and the disturbances of the atmospheric pressure in the Baltic Sea is a superposition of standing oscillations with random phases. The cross spectral analysis of the synchronous observation series of the level in the Gulf of Finland has shown that the nodal lines of the standing dilatational waves are clearly traced with frequencies corresponding to the distance from the nodal line to the top of the gulf (a quarter of the wave length). Several areas of the water basin with clearly expressed resonant properties may be distinguished: the Gulfs of Finland, Riga, and Bothnia, Neva Bay, etc. The estimations of the statistical correlation of the sea level oscillations with the variation of the wind and atmospheric pressure indicate the dominant role of the zonal wind component during the formation of the floods in the Gulf of Finland. The probable reason for the extreme floods in St. Petersburg may be the resonance rocking of the eigenmode oscillations corresponding to the basic fundamental seiche mode of the Gulf of Finland with a period of 27 h when the repeated atmospheric disturbances in the Baltic Sea occur with a period of 1-2 days.
Robinson, John D; Wares, John P; Drake, John M
2013-01-01
Extinction is ubiquitous in natural systems and the ultimate fate of all biological populations. However, the factors that contribute to population extinction are still poorly understood, particularly genetic diversity and composition. A laboratory experiment was conducted to examine the influences of environmental variation and genotype diversity on persistence in experimental Daphnia magna populations. Populations were initiated in two blocks with one, two, three, or six randomly selected and equally represented genotypes, fed and checked for extinction daily, and censused twice weekly over a period of 170 days. Our results show no evidence for an effect of the number of genotypes in a population on extinction hazard. Environmental variation had a strong effect on hazards in both experimental blocks, but the direction of the effect differed between blocks. In the first block, variable environments hastened extinction, while in the second block, hazards were reduced under variable food input. This occurred despite greater fluctuations in population size in variable environments in the second block of our experiment. Our results conflict with previous studies, where environmental variation consistently increased extinction risk. They are also at odds with previous studies in other systems that documented significant effects of genetic diversity on population persistence. We speculate that the lack of sexual reproduction, or the phenotypic similarity among our experimental lines, might underlie the lack of a significant effect of genotype diversity in our study. PMID:23467276
NASA Astrophysics Data System (ADS)
Moreno, Jackeline; Vogeley, Michael S.; Richards, Gordon; O'Brien, John T.; Kasliwal, Vishal
2018-01-01
We present rigorous testing of survey cadences (K2, SDSS, CRTS, & Pan-STARRS) for quasar variability science using a magnetohydrodynamics synthetic lightcurve and the canonical lightcurve from Kepler, Zw 229.15. We explain where the state of the art is in regards to physical interpretations of stochastic models (CARMA) applied to AGN variability. Quasar variability offers a time domain approach of probing accretion physics at the SMBH scale. Evidence shows that the strongest amplitude changes in the brightness of AGN occur on long timescales ranging from months to hundreds of days. These global behaviors can be constrained by survey data despite low sampling resolution. CARMA processes provide a flexible family of models used to interpolate between data points, predict future observations and describe behaviors in a lightcurve. This is accomplished by decomposing a signal into rise and decay timescales, frequencies for cyclic behavior and shock amplitudes. Characteristic timescales may point to length-scales over which a physical process operates such as turbulent eddies, warping or hotspots due to local thermal instabilities. We present the distribution of SDSS Stripe 82 quasars in CARMA parameters space that pass our cadence tests and also explain how the Damped Harmonic Oscillator model, CARMA(2,1), reduces to the Damped Random Walk, CARMA(1,0), given the data in a specific region of the parameter space.
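A minimal sketch of the damped random walk (CARMA(1,0)) mentioned above, simulated on an irregular cadence; the timescale, amplitude and mean magnitude are placeholders, not fitted Stripe 82 values:

```python
# Minimal sketch: simulate a damped random walk (CARMA(1,0), i.e. an
# Ornstein-Uhlenbeck process) on an irregular observing cadence.
# Timescale and amplitude are placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(5)

tau = 200.0          # damping (decorrelation) timescale, days (placeholder)
sigma_inf = 0.2      # asymptotic standard deviation, mag (placeholder)

t = np.sort(rng.uniform(0.0, 3000.0, 400))   # irregular epochs, days
x = np.empty_like(t)
x[0] = rng.normal(0.0, sigma_inf)
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    decay = np.exp(-dt / tau)
    # exact conditional update of the OU process between unevenly spaced epochs
    x[i] = x[i - 1] * decay + rng.normal(0.0, sigma_inf * np.sqrt(1.0 - decay**2))

lightcurve = 19.0 + x     # placeholder mean magnitude
print(lightcurve[:5])
```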
Retrocausation Or Extant Indefinite Reality?
NASA Astrophysics Data System (ADS)
Houtkooper, Joop M.
2006-10-01
The possibility of retrocausation has been considered to explain the occurrence of anomalous phenomena in which the ostensible effects are preceded by their causes. A scrutiny of both experimental methodology and the experimental data is called for. A review of experimental data reveals the existence of such effects to be a serious possibility. The experimental methodology entails some conceptual difficulties, these depending on the underlying assumptions about the effects. A major point is an ambiguity between anomalous acquisition of information and retrocausation in exerted influences. A unifying theory has been proposed, based upon the fundamental randomness of quantum mechanics. Quantum mechanical randomness may be regarded as a tenacious phenomenon, that apparently is only resolved by the human observer of the random variable in question. This has led to the "observational theory" of anomalous phenomena, which is based upon the assumption that the preference of a motivated observer is able to interact with the extant indefinite random variable that is being observed. This observational theory has led to a novel prediction, which has been corroborated in experiments. Moreover, different classes of anomalous phenomena can be explained by the same basic mechanism. This foregoes retroactive causation, but, instead, requires that macroscopic physical variables remain in a state of indefinite reality and thus remain influenceable by mental efforts until these are observed. More work is needed to discover the relevant psychological and neurophysiological variables involved in effective motivated observation. Besides these practicalities, the fundamentals still have some interesting loose ends.
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2016-05-01
wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of the volume of this information or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management operations by prioritizing the presented events based on users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that are interesting to them. These topics constitute a browsing behavior vector which is used for learning users' interests implicitly, without being intrusive. The system then estimates user preferences and returns an event list sorted from the most preferred to the least. User preferences are modeled via a Naïve Bayesian network which consists of: a) the 'decision' random variable, corresponding to the user's decision on attending an event; b) the 'distance' random variable, modeled by a linear regression that estimates the probability that the distance between a user and each event destination is not discouraging; c) the 'seat availability' random variable, modeled by a linear regression that estimates the probability that the seat availability is encouraging; and d) the 'relevance' random variable, modeled by clustering-based collaborative filtering, which determines the relevance of each event to the user's interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
A Bayesian, generalized frailty model for comet assays.
Ghebretinsae, Aklilu Habteab; Faes, Christel; Molenberghs, Geert; De Boeck, Marlies; Geys, Helena
2013-05-01
This paper proposes a flexible modeling approach for so-called comet assay data regularly encountered in preclinical research. While such data consist of non-Gaussian outcomes in a multilevel hierarchical structure, traditional analyses typically completely or partly ignore this hierarchical nature by summarizing measurements within a cluster. Non-Gaussian outcomes are often modeled using exponential family models. This is true not only for binary and count data, but also, for example, for time-to-event outcomes. Two important reasons for extending this family are (1) the possible occurrence of overdispersion, meaning that the variability in the data may not be adequately described by the models, which often exhibit a prescribed mean-variance link, and (2) the accommodation of a hierarchical structure in the data, owing to clustering in the data. The first issue is dealt with through so-called overdispersion models. Clustering is often accommodated through the inclusion of random subject-specific effects. Though not always, one conventionally assumes such random effects to be normally distributed. In the case of time-to-event data, one encounters, for example, the gamma frailty model (Duchateau and Janssen, 2007). While both of these issues may occur simultaneously, models combining both are uncommon. Molenberghs et al. (2010) proposed a broad class of generalized linear models accommodating overdispersion and clustering through two separate sets of random effects. Here, we use this method to model data from a comet assay with a three-level hierarchical structure. Although a conjugate gamma random effect is used for the overdispersion random effect, both gamma and normal random effects are considered for the hierarchical random effect. Apart from model formulation, we place emphasis on Bayesian estimation. Our proposed method has an advantage over the traditional analysis in that it (1) uses the appropriate distribution stipulated in the literature; (2) deals with the complete hierarchical nature; and (3) uses all information instead of summary measures. The fit of the model to the comet assay data is compared against the background of more conventional model fits. Results indicate the toxicity of 1,2-dimethylhydrazine dihydrochloride at different dose levels (low, medium, and high).
Dezman, B; Trninić, S; Dizdar, D
2001-06-01
The purpose of the research was to empirically verify the expert model system designed for more efficient orientation of basketball players to particular positions and /or roles in the game (specialization). Participants were 60 randomly chosen male basketball players (12 players per each position) from the 12 Croatian 1st league teams in season 1998/99. Data were gathered from 10 basketball coaches who estimated overall performance (actual quality) of players on defense (7 variables) and on offense (12 variables). Variables were established by Trninić, Perica and Dizdar. A measure of body height was added to the aforementioned group of variables. The results obtained suggest that the proposed decision-making system can be used as an auxiliary instrument in orienting players to the positions and roles in the game. It has been established that the players have attained the highest grades of overall performance exactly at their primary playing positions in the game. The largest differences were determined between point guards (position 1) and centers (position 5). The greatest difficulties have occurred in determining optimal position for small forwards (position 3), then for shooting guards (position 2) and, last, for power forwards (position 4), because all these basketball players are the most versatile ones. Therefore, reliability of the system is the lowest when it is applied for selecting and orientating players to these positions. Convenient body height significantly contributes to aptitude of these players to play multiple positions and to assume multiple roles in the game. This research has reinforced the thesis that body height is a variable with the greatest influence on orientation of players to particular positions and roles in the game.
Ozone and sulfur dioxide effects on three tall fescue cultivars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flagler, R.B.; Youngner, V.B.
Although many reports have been published concerning differential susceptibility of various crops and/or cultivars to air pollutants, most have used foliar injury instead of the marketable yield as the factor that determined susceptibility for the crop. In an examination of screening in terms of marketable yield, three cultivars of tall fescue (Festuca arundinacea Schreb.), 'Alta,' 'Fawn,' and 'Kentucky 31,' were exposed to 0-0.40 ppm O3 or 0-0.50 ppm SO2 6 h/d, once a week, for 7 and 9 weeks, respectively. Experimental design was a randomized complete block with three replications. Statistical analysis was by standard analysis of variance and regression techniques. Three variables were analyzed: top dry weight (yield), tiller number, and weight per tiller. Ozone had a significant effect on all three variables. Significant linear decreases in yield and weight per tiller occurred with increasing O3 concentrations. Linear regressions of these variables on O3 concentration produced significantly different regression coefficients. The coefficient for Kentucky 31 was significantly greater than Alta or Fawn, which did not differ from each other. This indicated that Kentucky 31 was more susceptible to O3 than either of the other cultivars. Percent reductions in dry weight for the three cultivars at highest O3 level were 35, 44, and 53%, respectively, for Fawn, Alta, and Kentucky 31. For weight per tiller, Kentucky 31 had a higher percent reduction than the other cultivars (59 vs. 46 and 44%). Tiller number was generally increased by O3, but this variable was not useful for determining differential susceptibility to the pollutant. Sulfur dioxide treatments produced no significant effects on any of the variables analyzed.
Variation in perioperative care across centers for infants undergoing the Norwood procedure.
Pasquali, Sara K; Ohye, Richard G; Lu, Minmin; Kaltman, Jonathan; Caldarone, Christopher A; Pizarro, Christian; Dunbar-Masterson, Carolyn; Gaynor, J William; Jacobs, Jeffrey P; Kaza, Aditya K; Newburger, Jane; Rhodes, John F; Scheurer, Mark; Silver, Eric; Sleeper, Lynn A; Tabbutt, Sarah; Tweddell, James; Uzark, Karen; Wells, Winfield; Mahle, William T; Pearson, Gail D
2012-10-01
In the Single Ventricle Reconstruction trial, infants undergoing the Norwood procedure were randomly allocated to undergo a right ventricle-to-pulmonary artery shunt or a modified Blalock-Taussig shunt. Apart from shunt type, subjects received the local standard of care. We evaluated variation in perioperative care during the Norwood hospitalization across 14 trial sites. Data on preoperative, operative, and postoperative variables for 546 enrolled subjects who underwent the Norwood procedure were collected prospectively on standardized case report forms, and variation across the centers was described. Gestational age, birth weight, and proportion with hypoplastic left heart syndrome were similar across sites. In contrast, all recorded variables related to preoperative care varied across centers, including fetal diagnosis (range, 55%-85%), preoperative intubation (range, 29%-91%), and enteral feeding. Perioperative and operative factors were also variable across sites, including median total support time (range, 74-189 minutes) and other perfusion variables, arch reconstruction technique, intraoperative medication use, and use of modified ultrafiltration (range, 48%-100%). Additional variation across centers was seen in variables related to postoperative care, including proportion with an open sternum (range, 35%-100%), median intensive care unit stay (range, 9-44 days), type of feeding at discharge, and enrollment in a home monitoring program (range, 1%-100%; 5 sites did not have a program). Overall, in-hospital death or transplant occurred in 18% (range across sites, 7%-39%). Perioperative care during the Norwood hospitalization varies across centers. Further analysis evaluating the underlying causes and relationship of this variation to outcome is needed to inform future studies and quality improvement efforts. Copyright © 2012 The American Association for Thoracic Surgery. All rights reserved.
Plass-Johnson, Jeremiah G; Taylor, Marc H; Husain, Aidah A A; Teichberg, Mirta C; Ferse, Sebastian C A
2016-01-01
Changes in the coral reef complex can affect predator-prey relationships, resource availability and niche utilisation in the associated fish community, which may be reflected in decreased stability of the functional traits present in a community. This is because particular traits may be favoured by a changing environment, or by habitat degradation. Furthermore, other traits can be selected against because degradation can relax the association between fishes and benthic habitat. We characterised six important ecological traits for fish species occurring at seven sites across a disturbed coral reef archipelago in Indonesia, where reefs have been exposed to eutrophication and destructive fishing practices for decades. Functional diversity was assessed using two complementary indices (FRic and RaoQ) and correlated to important environmental factors (live coral cover and rugosity, representing local reef health, and distance from shore, representing a cross-shelf environmental gradient). Indices were examined for both a change in their mean, as well as temporal (short-term; hours) and spatial (cross-shelf) variability, to assess whether fish-habitat association became relaxed along with habitat degradation. Furthermore, variability in individual traits was examined to identify the traits that are most affected by habitat change. Increases in the general reef health indicators, live coral cover and rugosity (correlated with distance from the mainland), were associated with decreases in the variability of functional diversity and with community-level changes in the abundance of several traits (notably home range size, maximum length, microalgae, detritus and small invertebrate feeding and reproductive turnover). A decrease in coral cover increased variability of RaoQ while rugosity and distance both inversely affected variability of FRic; however, averages for these indices did not reveal patterns associated with the environment. These results suggest that increased degradation of coral reefs is associated with increased variability in fish community functional composition resulting from selective impacts on specific traits, thereby affecting the functional response of these communities to increasing perturbations.
Borchers, M R; Chang, Y M; Proudfoot, K L; Wadsworth, B A; Stone, A E; Bewley, J M
2017-07-01
The objective of this study was to use automated activity, lying, and rumination monitors to characterize prepartum behavior and predict calving in dairy cattle. Data were collected from 20 primiparous and 33 multiparous Holstein dairy cattle from September 2011 to May 2013 at the University of Kentucky Coldstream Dairy. The HR Tag (SCR Engineers Ltd., Netanya, Israel) automatically collected neck activity and rumination data in 2-h increments. The IceQube (IceRobotics Ltd., South Queensferry, United Kingdom) automatically collected number of steps, lying time, standing time, number of transitions from standing to lying (lying bouts), and total motion, summed in 15-min increments. IceQube data were summed in 2-h increments to match HR Tag data. All behavioral data were collected for 14 d before the predicted calving date. Retrospective data analysis was performed using mixed linear models to examine behavioral changes by day in the 14 d before calving. Bihourly behavioral differences from baseline values over the 14 d before calving were also evaluated using mixed linear models. Changes in daily rumination time, total motion, lying time, and lying bouts occurred in the 14 d before calving. In the bihourly analysis, extreme values for all behaviors occurred in the final 24 h, indicating that the monitored behaviors may be useful in calving prediction. To determine whether technologies were useful at predicting calving, random forest, linear discriminant analysis, and neural network machine-learning techniques were constructed and implemented using R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). These methods were used on variables from each technology and all combined variables from both technologies. A neural network analysis that combined variables from both technologies at the daily level yielded 100.0% sensitivity and 86.8% specificity. A neural network analysis that combined variables from both technologies in bihourly increments was used to identify 2-h periods in the 8 h before calving with 82.8% sensitivity and 80.4% specificity. Changes in behavior and machine-learning alerts indicate that commercially marketed behavioral monitors may have calving prediction potential. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
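A minimal sketch of the classification step described above, assuming scikit-learn's MLPClassifier; the feature matrix stands in for the combined monitor variables and is synthetic placeholder data, not the study's sensor records:

```python
# Minimal sketch of the classification step, assuming scikit-learn. The feature
# matrix (daily rumination, activity, lying time, lying bouts, steps, total motion)
# is synthetic placeholder data, not the study's sensor records.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(6)
n = 600
X = rng.normal(size=(n, 6))                       # placeholder behavior variables
# Placeholder label "calving within 24 h", loosely tied to two of the features
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.8, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```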
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
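A minimal numerical sketch of the dilution result stated above: the empirical eigenvalue density of a randomly diluted symmetric Gaussian matrix is compared with the semicircle density; the matrix size and dilution probability are placeholders:

```python
# Minimal sketch: eigenvalue density of a randomly diluted real symmetric matrix,
# rescaled so that the limiting support is [-2, 2]; the semicircle density is
# evaluated numerically for comparison. N and the dilution probability p are placeholders.
import numpy as np

rng = np.random.default_rng(7)

N, p = 2000, 0.1                       # matrix size and fraction of entries kept
A = rng.normal(size=(N, N))
mask = rng.random((N, N)) < p          # random dilution of the entries
A = np.triu(A * mask, 1)
A = A + A.T                            # symmetric, zero diagonal

# Rescale so that the variance of each off-diagonal entry is 1/N
A = A / np.sqrt(N * p)

eigs = np.linalg.eigvalsh(A)
hist, edges = np.histogram(eigs, bins=50, range=(-2.5, 2.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.where(np.abs(centers) <= 2,
                      np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi), 0.0)
print(np.max(np.abs(hist - semicircle)))   # small deviation indicates a semicircle-like density
```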
Baseline-dependent effect of noise-enhanced insoles on gait variability in healthy elderly walkers.
Stephen, Damian G; Wilcox, Bethany J; Niemi, James B; Franz, Jason R; Franz, Jason; Kerrigan, Dr; Kerrigan, D Casey; D'Andrea, Susan E
2012-07-01
The purpose of this study was to determine whether providing subsensory stochastic-resonance mechanical vibration to the foot soles of elderly walkers could decrease gait variability. In a randomized double-blind controlled trial, 29 subjects engaged in treadmill walking while wearing sandals customized with three actuators capable of producing stochastic-resonance mechanical vibration embedded in each sole. For each subject, we determined a subsensory level of vibration stimulation. After a 5-min acclimation period of walking with the footwear, subjects were asked to walk on the treadmill for six trials, each 30s long. Trials were pair-wise random: in three trials, actuators provided subsensory vibration; in the other trials, they did not. Subjects wore reflective markers to track body motion. Stochastic-resonance mechanical stimulation exhibited baseline-dependent effects on spatial stride-to-stride variability in gait, slightly increasing variability in subjects with least baseline variability and providing greater reductions in variability for subjects with greater baseline variability (p<.001). Thus, applying stochastic-resonance mechanical vibrations on the plantar surface of the foot reduces gait variability for subjects with more variable gait. Stochastic-resonance mechanical vibrations may provide an effective intervention for preventing falls in healthy elderly walkers. Published by Elsevier B.V.
Kroll, M; Ring, C; Gaus, W; Hempel, B
2005-06-01
In a randomized, double-blind, placebo-controlled, parallel group, phase III clinical trial efficacy and safety of Korodin, a combination of natural D-camphor and an extract from fresh crataegus berries, was investigated in patients 50 years and older with orthostatic hypotension. At visit 1 eligibility of patients was checked and a placebo medication was given to all patients. At visit 2 orthostatic hypotension had to be reconfirmed, then the patient was randomized either to Korodin or placebo, study medication (25 drops) was applied once and then outcome was measured. After 7 days of home treatment with daily 3 x 25 drops outcome was measured at visit 3. Systolic blood pressure (SBP), diastolic blood pressure (DBP) and heart rate (HR) were documented 10, 5, 2 and 0 min before as well as 1, 3, 5, 8, and 10 min after getting in the upright position at visit 1, at visit 2 before and after application of study medication and at visit 3. Primary outcome was the change of mean arterial blood pressure (MAP) from just before standing up to the nadir within the first 3 min after standing up. Secondary outcome variables were SBP, DBP, HR, quality of life (SF-12) and seven typical signs and symptoms of orthostatic hypotension. The study was performed in a rehabilitation clinic and in two doctor's practices in Germany from November 2002 to May 2003. During this time, 57 patients were admitted to the study, 39 patients were eligible and randomized, 38 patients were treated according to protocol and evaluated, 21 patients with Korodin and 17 patients with placebo. After a single application the median decrease of MAP was 11.4 mmHg for Korodin and 14.0 mmHg for placebo. Compared to baseline, the median MAP improved 4.3 mmHg for Korodin and 0.3 mmHg for placebo. After 1 week of treatment the decrease of median MAP after standing up was 9.3 mmHg for Korodin and 13.3 mmHg for placebo. Compared to baseline, the improvement was 5.9 mmHg for Korodin and 1.6 mmHg for placebo. Efficacy of 1 week treatment was significant. For the single application a superiority of Korodin over placebo was seen; however, it was not significant. All secondary outcome variables confirmed these findings, except for the physical summary score in the quality of life evaluation (SF-12 questionnaire). Only one adverse event occurred, but this was not serious and without relationship to the study medication. The other safety variables (SBP, DBP, HR, ECG, physical examination) did not show any problems. This study demonstrates that Korodin is efficacious for orthostatic hypotension in patients over 50 years.
Bernardi, Julia Laura Delbue; Jordão, Regina Esteves; Barros Filho, Antônio de Azevedo
2009-07-01
Increasing obesity is starting to occur among Brazilians. The aim of this study was to investigate the weight and length of children under two years of age in relation to sociodemographic variables and according to whether they were breastfed. Cross-sectional randomized study conducted in 2004-2005, based on the declaration of live births (SINASC) in Campinas, Brazil. 2,857 mothers of newborns were interviewed and answered a questionnaire seeking socioeconomic and breastfeeding information. The newborns' weights and lengths were measured at the end of the interviews and the body mass index was calculated. Percentiles (< 15 and > 85) and Z-scores (< -1 and > +1) were used for classification based on the new growth charts recommended by WHO (2006). The log-rank test, multiple linear regression and binomial test (Z) were used. The statistical significance level used was 5%. The predominant social level was class C. The median for exclusive breastfeeding was 90 days; 61.25% of the children were between P15 and P85 for body mass index and 61.12% for length, respectively. Children whose mothers studied for nine to eleven years and children whose mothers were unemployed presented lower weight. Children whose mothers worked in health-related professions presented lower length when correlated with breastfeeding. The breastfeeding, maternal schooling and maternal occupation levels had an influence on nutrition status and indicated that obesity is occurring in early childhood among the infants living in the municipality.
Armour, Carol L; Reddel, Helen K; LeMay, Kate S; Saini, Bandana; Smith, Lorraine D; Bosnic-Anticevich, Sinthia Z; Song, Yun Ju Christine; Alles, M Chehani; Burton, Deborah L; Emmerton, Lynne; Stewart, Kay; Krass, Ines
2013-04-01
To test the feasibility, effectiveness, and sustainability of a pharmacy asthma service in primary care. A pragmatic cluster randomized trial in community pharmacies in four Australian states/territories in 2009. Specially trained pharmacists were randomized to deliver an asthma service in two groups, providing three versus four consultations over 6 months. People with poorly controlled asthma or no recent asthma review were included. Follow-up for 12 months after service completion occurred in 30% of randomly selected completing patients. Outcomes included change in asthma control (poor and fair/good) and Asthma Control Questionnaire (ACQ) score, inhaler technique, quality of life, perceived control, adherence, asthma knowledge, and asthma action plan ownership. Ninety-six pharmacists enrolled 570 patients, with 398 (70%) completing. Asthma control significantly improved with both the three- and four-visit service, with no significant difference between groups (good/fair control 29% and 21% at baseline, 61% and 59% at end, p = .791). Significant improvements were also evident in the ACQ (mean change 0.56), inhaler technique (17-33% correct baseline, 57-72% end), asthma action plan ownership (19% baseline, 56% end), quality of life, adherence, perceived control, and asthma knowledge, with no significant difference between groups for any variable. Outcomes were sustained at 12 months post-service. The pharmacy asthma service delivered clinically important improvements in both a three-visit and four-visit service. Pharmacists were able to recruit and deliver the service with minimal intervention, suggesting it is practical to implement in practice. The three-visit service would be feasible and effective to implement, with a review at 12 months.
Rezende, Márcia; Chemin, Kaprice; Vaez, Savil Costa; Peixoto, Aline Carvalho; Rabelo, Jéssica de Freitas; Braga, Stella Sueli Lourenço; Faria-E-Silva, André Luis; Silva, Gisele Rodrigues da; Soares, Carlos José; Loguercio, Alessandro D; Reis, Alessandra
2018-05-01
Tooth sensitivity commonly occurs during and immediately after dental bleaching. The authors conducted a trial to compare tooth sensitivity after in-office bleaching after the use of either a topical dipyrone or placebo gel. A split-mouth, triple-blind, randomized, multicenter clinical trial was conducted among 120 healthy adults having teeth that were shade A2 or darker. The facial tooth surfaces of the right or left sides of the maxillary arch of each patient were randomly assigned to receive either topical dipyrone or placebo gel before 2 in-office bleaching sessions (35% hydrogen peroxide) separated by 2 weeks. Visual analog and numerical rating scales were used to record tooth sensitivity during and up to 48 hours after bleaching. Tooth color change from baseline to 1 month after bleaching was measured with shade guide and spectrophotometer measures. The primary outcome variable was absolute risk of tooth sensitivity. An intention-to-treat analysis was used to analyze data from all patients who were randomly assigned to receive the dipyrone and placebo gels. No statistically significant difference was found in the absolute risk of tooth sensitivity between the dipyrone and placebo gels (83% and 90%, respectively, P = .09; relative risk, 0.92; 95% confidence interval, 0.8 to 1.0). A whitening effect was observed in both groups with no statistically significant difference (P > .05) between them. No adverse effects were observed. Topical use of dipyrone gel before tooth bleaching, at the levels used in this study, did not reduce the risk or intensity of bleaching-induced tooth sensitivity. Topical application of dipyrone gel does not reduce bleaching-induced tooth sensitivity. Copyright © 2018 American Dental Association. Published by Elsevier Inc. All rights reserved.
Geiser, Christian; Griffin, Daniel; Shiffman, Saul
2016-01-01
Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that post quitting, smokers in both the NRT and placebo group had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT. PMID:27499744
NASA Astrophysics Data System (ADS)
Maekawa, Keiichi; Makiyama, Hideki; Yamamoto, Yoshiki; Hasegawa, Takumi; Okanishi, Shinobu; Sonoda, Kenichiro; Shinkawata, Hiroki; Yamashita, Tomohiro; Kamohara, Shiro; Yamaguchi, Yasuo
2018-04-01
The low-frequency noise (LFN) variability in bulk and fully depleted silicon-on-insulator (FDSOI) metal–oxide–semiconductor field-effect transistors (MOSFETs) with silicon-on-thin-box (SOTB) technology was investigated. LFN typically comprises a flicker noise component and a single Lorentzian component caused by random telegraph noise (RTN). In the weak inversion regime, random dopant fluctuation (RDF) in the channel strongly affects not only RTN variability but also flicker noise variability in the bulk MOSFET compared with the SOTB MOSFET, because of local carrier number fluctuation in the channel. On the other hand, the typical level of LFN in the SOTB MOSFET is slightly larger than that in the bulk MOSFET because of an additional interface on the buried oxide layer. However, considering the tailing characteristics of the LFN variability, LFN in the SOTB MOSFET can be regarded as smaller than that in the bulk MOSFET, which enables the low-voltage operation of analog circuits.
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
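As a minimal illustration of the problem the authors address, the per-observation spread of individual tree predictions can serve as a crude uncertainty proxy for a random forest regressor. This sketch is not the estimator developed in the paper; the synthetic data, scikit-learn model, and settings are assumptions for illustration only.

# Illustrative sketch only: approximate per-prediction uncertainty from the spread
# of individual tree predictions in a random forest regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Collect every tree's prediction for the first five observations.
per_tree = np.stack([tree.predict(X[:5]) for tree in rf.estimators_])  # shape (n_trees, 5)
mean_pred = per_tree.mean(axis=0)   # ensemble prediction
spread = per_tree.std(axis=0)       # crude per-observation uncertainty proxy
for m, s in zip(mean_pred, spread):
    print(f"prediction = {m:8.2f}  +/- {s:6.2f} (1 SD across trees)")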
Complete convergence of randomly weighted END sequences and its application.
Li, Penghua; Li, Xiaoqin; Wu, Kehan
2017-01-01
We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results of complete moment convergence, complete convergence and the strong law of large numbers for this dependent structure are obtained. As an application, we study the convergence of the state observers of linear-time-invariant systems. Our results extend the corresponding earlier ones.
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/(n^{-1} ∑_{j=1}^{n} x_j^2), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β = 2)(w) which are valid for arbitrary n and analyse their behaviour.
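A quick Monte Carlo check of the stated limit can be run for the orthogonal case (β = 1): sample GOE matrices scaled so the spectrum lies in [−2, 2], form w for a randomly chosen eigenvalue, and compare the histogram with √((4 − w)/w)/(2π), where the 1/(2π) normalization is added here so the density integrates to one over [0, 4]. The matrix size, number of trials, and scaling below are illustrative assumptions.

# Illustrative sketch only: empirical distribution of w for GOE (beta = 1) matrices.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 1000
w_samples = []
for _ in range(trials):
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2 * n)      # GOE, scaled so eigenvalues lie in roughly [-2, 2]
    lam = np.linalg.eigvalsh(H)
    x = lam[rng.integers(n)]            # one randomly chosen ("unordered") eigenvalue
    w_samples.append(x**2 / np.mean(lam**2))

w = np.array(w_samples)
hist, edges = np.histogram(w, bins=8, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mp = np.sqrt((4 - centers) / centers) / (2 * np.pi)   # normalized Marcenko-Pastur form
for c, h, m in zip(centers, hist, mp):
    print(f"w = {c:4.2f}   empirical density = {h:5.3f}   Marcenko-Pastur = {m:5.3f}")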
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, the sampling strategy plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
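A minimal sketch of the unconditional version of this idea: draw Latin-hypercube-stratified standard-normal deviates and impose spatial correlation through a Cholesky (LU-type) factor of the covariance matrix. The one-dimensional grid, exponential covariance model, and parameter values are assumptions for illustration, not the study's configuration.

# Illustrative sketch only: unconditional LHS + LU-type factorization of the covariance.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x = np.arange(50)                                      # 1-D grid of 50 cells (assumed)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)    # exponential covariance, correlation length 10
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))     # lower-triangular (LU-type) factor

def lhs_normal(n_vars, n_real, rng):
    # One Latin hypercube stratum per realization for every variable, mapped to N(0, 1).
    strata = np.tile(np.arange(n_real), (n_vars, 1))
    u = (rng.permuted(strata, axis=1) + rng.random((n_vars, n_real))) / n_real
    return norm.ppf(u)

n_real = 100
fields = L @ lhs_normal(x.size, n_real, rng)           # each column is one correlated realization
print("realization array shape:", fields.shape)        # (50, 100)
print("variance at cell 0 across realizations:", round(fields[0].var(), 3))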
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model
Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
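A small simulation can make the two opposing biases concrete. In the hypothetical data-generating model below, X is randomized, the mediator M is observed with error, and an omitted variable U confounds the M-to-Y path; all path coefficients are assumptions chosen only to illustrate the direction of bias in the product-of-coefficients estimate.

# Illustrative sketch only: combined effect of mediator measurement error and an
# omitted M -> Y confounder on the product-of-coefficients mediated effect.
import numpy as np

rng = np.random.default_rng(1)
n, a, b, c_prime = 100_000, 0.5, 0.4, 0.2      # assumed true paths: X->M, M->Y, direct
X = rng.binomial(1, 0.5, n).astype(float)      # randomized treatment (perfectly reliable)
U = rng.normal(size=n)                         # omitted confounder of the M -> Y relation
M = a * X + 0.6 * U + rng.normal(size=n)       # true mediator
M_obs = M + rng.normal(scale=1.0, size=n)      # mediator measured with error
Y = c_prime * X + b * M + 0.6 * U + rng.normal(size=n)

def ols(y, *cols):
    Z = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a_hat = ols(M_obs, X)[1]             # unbiased because X is randomized
b_hat = ols(Y, X, M_obs)[2]          # attenuated by error, inflated by confounding
print(f"true mediated effect a*b      = {a*b:.3f}")
print(f"estimated mediated effect     = {a_hat*b_hat:.3f}")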
Measurement effects of seasonal and monthly variability on pedometer-determined data.
Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E
2012-03-01
The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
Perceived Cost and Intrinsic Motor Variability Modulate the Speed-Accuracy Trade-Off
Bertucco, Matteo; Bhanpuri, Nasir H.; Sanger, Terence D.
2015-01-01
Fitts’ Law describes the speed-accuracy trade-off of human movements, and it is an elegant strategy that compensates for random and uncontrollable noise in the motor system. The control strategy during targeted movements may also take into account the rewards or costs of any outcomes that may occur. The aim of this study was to test the hypothesis that movement time in Fitts’ Law emerges not only from the accuracy constraints of the task, but also depends on the perceived cost of error for missing the targets. Subjects were asked to touch targets on an iPad® screen with different costs for missed targets. We manipulated the probability of error by comparing children with dystonia (who are characterized by increased intrinsic motor variability) to typically developing children. The results show a strong effect of the cost of error on the Fitts’ Law relationship characterized by an increase in movement time as cost increased. In addition, we observed a greater sensitivity to increased cost for children with dystonia, and this behavior appears to minimize the average cost. The findings support a proposed mathematical model that explains how movement time in a Fitts-like task is related to perceived risk. PMID:26447874
Radionuclide Gas Transport through Nuclear Explosion-Generated Fracture Networks
Jordan, Amy B.; Stauffer, Philip H.; Knight, Earl E.; Rougier, Esteban; Anderson, Dale N.
2015-01-01
Underground nuclear weapon testing produces radionuclide gases which may seep to the surface. Barometric pumping of gas through explosion-fractured rock is investigated using a new sequentially-coupled hydrodynamic rock damage/gas transport model. Fracture networks are produced for two rock types (granite and tuff) and three depths of burial. The fracture networks are integrated into a flow and transport numerical model driven by surface pressure signals of differing amplitude and variability. There are major differences between predictions using a realistic fracture network and prior results that used a simplified geometry. Matrix porosity and maximum fracture aperture have the greatest impact on gas breakthrough time and window of opportunity for detection, with different effects between granite and tuff simulations highlighting the importance of accurately simulating the fracture network. In particular, maximum fracture aperture has an opposite effect on tuff and granite, due to different damage patterns and their effect on the barometric pumping process. From stochastic simulations using randomly generated hydrogeologic parameters, normalized detection curves are presented to show differences in optimal sampling time for granite and tuff simulations. Seasonal and location-based effects on breakthrough, which occur due to differences in barometric forcing, are stronger where the barometric signal is highly variable. PMID:26676058
Plan-provider integration, premiums, and quality in the Medicare Advantage market.
Frakt, Austin B; Pizer, Steven D; Feldman, Roger
2013-12-01
To investigate how integration between Medicare Advantage plans and health care providers is related to plan premiums and quality ratings. We used public data from the Centers for Medicare and Medicaid Services (CMS) and the Area Resource File and private data from one large insurer. Premiums and quality ratings are from 2009 CMS administrative files and some control variables are historical. We estimated ordinary least-squares models for premiums and plan quality ratings, with state fixed effects and firm random effects. The key independent variable was an indicator of plan-provider integration. With the exception of Medigap premium data, all data were publicly available. We ascertained plan-provider integration through examination of plans' websites and governance documents. We found that integrated plan-providers charge higher premiums, controlling for quality. Such plans also have higher quality ratings. We found no evidence that integration is associated with more generous benefits. Current policy encourages plan-provider integration, although potential effects on health insurance products and markets are uncertain. Policy makers and regulators may want to closely monitor changes in premiums and quality after integration and consider whether quality improvement (if any) justifies premium increases (if they occur). © Health Research and Educational Trust.
Assessing patient risk of central line-associated bacteremia via machine learning.
Beeler, Cole; Dbeibo, Lana; Kelley, Kristen; Thatcher, Levi; Webb, Douglas; Bah, Amadou; Monahan, Patrick; Fowler, Nicole R; Nicol, Spencer; Judy-Malcolm, Alisa; Azar, Jose
2018-04-13
Central line-associated bloodstream infections (CLABSIs) contribute to increased morbidity, length of hospital stay, and cost. Despite progress in understanding the risk factors, there remains a need to accurately predict the risk of CLABSIs and, in real time, prevent them from occurring. A predictive model was developed using retrospective data from a large academic healthcare system. Models were developed with machine learning via construction of random forests using validated input variables. Fifteen variables accounted for the most significant effect on CLABSI prediction based on a retrospective study of 70,218 unique patient encounters between January 1, 2013, and May 31, 2016. The area under the receiver operating characteristic curve for the best-performing model was 0.82 in production. This model has multiple applications for resource allocation for CLABSI prevention, including serving as a tool to target patients at highest risk for potentially cost-effective but otherwise time-limited interventions. Machine learning can be used to develop accurate models to predict the risk of CLABSI in real time prior to the development of infection. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship
NASA Astrophysics Data System (ADS)
Viswanathan, M. N.
1984-02-01
Recharge levels of unconfined aquifers recharged entirely by rainfall are determined by developing a model for the aquifer that estimates water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence the recharge were assumed not only to be time dependent but also to have different rates of time dependence for different parameters. Such a model is solved by a recursive least-squares method, with the variable-rate parameter variation incorporated through a random walk model. Field tests conducted at the Tomago Sandbeds, Newcastle, Australia, showed that assuming variable rates of time dependence for the recharge parameters produced better estimates of water-table levels than constant recharge parameters. Considerable recharge due to rainfall occurred on the same day as the rainfall, while the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly on the intensity and history of rainfall; isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on water-table levels.
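A minimal sketch of a recursive least-squares filter in which each regression parameter follows its own random walk, so different parameters can drift at different rates. The synthetic rainfall and water-table data, the simple two-parameter model, and the process-noise values are assumptions for illustration, not the Tomago Sandbeds model.

# Illustrative sketch only: recursive least squares with per-parameter random walks.
import numpy as np

rng = np.random.default_rng(0)
T = 200
rain = rng.gamma(shape=1.0, scale=5.0, size=T)             # synthetic daily rainfall
true_theta = np.column_stack([0.02 + 0.0005 * np.arange(T),  # slowly drifting recharge coefficient
                              0.9 * np.ones(T)])             # nearly constant persistence
h = np.zeros(T)                                            # synthetic water-table series
for t in range(1, T):
    h[t] = true_theta[t, 1] * h[t - 1] + true_theta[t, 0] * rain[t] + rng.normal(scale=0.02)

theta = np.zeros(2)                      # estimated [recharge coefficient, persistence]
P = np.eye(2)                            # parameter covariance
q = np.diag([1e-6, 1e-8])                # per-parameter random-walk variances (different drift rates)
r = 0.02**2                              # observation noise variance
for t in range(1, T):
    P = P + q                            # random-walk prediction step
    x = np.array([rain[t], h[t - 1]])    # regressors
    k = P @ x / (x @ P @ x + r)          # gain
    theta = theta + k * (h[t] - x @ theta)
    P = P - np.outer(k, x) @ P
print("final parameter estimates:", theta, "  true values:", true_theta[-1])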
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-13
... DEPARTMENT OF COMMERCE International Trade Administration [C-580-851] Dynamic Random Access Memory... administrative review of the countervailing duty order on dynamic random access memory semiconductors from the... following events have occurred since the publication of the preliminary results of this review. See Dynamic...
Reduction of randomness in seismic noise as a short-term precursor to a volcanic eruption
Glynn, C. C.; Konstantinou, K. I.
2016-01-01
Ambient seismic noise is characterized by randomness incurred by the random position and strength of the noise sources as well as the heterogeneous properties of the medium through which it propagates. Here we use ambient noise data recorded prior to the 1996 Gjálp eruption in Iceland in order to show that a reduction of noise randomness can be a clear short-term precursor to volcanic activity. The eruption was preceded on 29 September 1996 by a Mw ~5.6 earthquake that occurred in the caldera rim of the Bárdarbunga volcano. A significant reduction of randomness started occurring 8 days before the earthquake and 10 days before the onset of the eruption. This reduction was observed even at stations more than 100 km away from the eruption site. Randomness increased to its previous levels 160 minutes after the Bárdarbunga earthquake, during which time aftershocks migrated from the Bárdarbunga caldera to a site near the Gjálp eruption fissure. We attribute this precursory reduction of randomness to the lack of higher frequencies (>1 Hz) in the noise wavefield caused by high absorption losses as hot magma ascended in the upper crust. PMID:27883050
A GIS modeling method applied to predicting forest songbird habitat
Dettmers, Randy; Bart, Jonathan
1999-01-01
We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be applied over large areas. Our method consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this method by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. The performance of these models was evaluated with an independent data set. Our tests showed that the models performed better than random at identifying where the birds occurred and provided useful information for predicting the amount and spatial distribution of good habitat for the birds we studied. In addition, we generally found positive correlations between the amount of habitat, as predicted by the models, and the number of territories within a given area. This added component provides the possibility, ultimately, of being able to estimate population sizes. Our models represent useful tools for resource managers who are interested in assessing the impacts of alternative management plans that could alter or remove habitat for these birds.
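The first step can be illustrated with a toy one-sided version of the CDF criterion: for a single habitat variable, compare the cumulative distribution at presence locations with that over the whole study area and pick the cutoff where the difference is largest (the published method derives a range rather than a single cutoff). The synthetic "moisture index" values below are assumptions for illustration.

# Illustrative sketch only: one-sided CDF-difference cutoff for one habitat variable.
import numpy as np

rng = np.random.default_rng(7)
study_area = rng.normal(0.0, 1.0, 20_000)   # habitat variable over all cells (assumed)
presence = rng.normal(0.8, 0.6, 300)        # same variable at observed presence locations (assumed)

grid = np.linspace(study_area.min(), study_area.max(), 500)
cdf_area = np.searchsorted(np.sort(study_area), grid) / study_area.size
cdf_pres = np.searchsorted(np.sort(presence), grid) / presence.size

# Presence locations are over-represented above the value where the CDF gap is largest.
threshold = grid[np.argmax(cdf_area - cdf_pres)]
print(f"candidate 'good habitat' cutoff for this variable: >= {threshold:.2f}")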
Pinedo, P J; Fleming, C; Risco, C A
2012-12-01
The objective of this study was to investigate the association between mastitis events occurring during the previous lactation, the dry period, and the peripartum period on the incidence of early lactation mastitis in cows receiving ceftiofur hydrochloride or penicillin dihydrostreptomycin as intramammary dry cow antibiotic therapy. Cows (n=402) from 2 large dairy farms in Central Florida were enrolled in the study at the time of dry-off processing and were randomly assigned to 1 of 2 dry cow therapies: ceftiofur hydrochloride or penicillin dihydrostreptomycin. Composite milk samples were collected at dry-off and after calving for bacteriological examination and somatic cell count. Peripartal health disorders were monitored during the first 30 d of lactation and included calving difficulty, metritis, ketosis, and left displaced abomasum. Milk production and individual somatic cell scores (SCS) were recorded monthly by the Dairy Herd Improvement Association. The main outcome variables were the risk of clinical mastitis during the first 30 and 60 d of lactation, and the risk of subclinical mastitis at the first 2 monthly Dairy Herd Improvement Association tests after calving (up to 70 d in milk). Additionally, the SCS and the presence of mastitis pathogens in milk at dry-off and at calving were analyzed. Explanatory variables consisted of events occurring during the previous lactation, at dry-off and during the dry period, at calving, and within the first 30 d after calving. Multiple events occurring during the previous lactation had a significant effect on the incidence of mastitis in the subsequent lactation. These events included low milk yield, intermediate lactation length, clinical mastitis, and lactation SCS average. Similarly, intramammary infections with environmental bacteria at dry-off increased the chances of clinical mastitis the first month after calving. Dry-off therapy had a significant effect on mastitis incidence; cows treated with ceftiofur hydrochloride had lower odds of having clinical and subclinical mastitis in the subsequent early lactation compared with cows treated with penicillin dihydrostreptomycin. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Nakashima, Shinichi; Ohishi, Akira; Takada, Fumio; Kawamura, Hideki; Igarashi, Maki; Fukami, Maki; Ogata, Tsutomu
2014-10-01
We report four patients with SRY-positive 46,XX testicular disorders of sex development (46,XX-TDSD) (cases 1-4). Case 1 exhibited underdeveloped external genitalia with hypospadias, case 2 manifested micropenis and cases 3 and 4 showed normal external genitalia. The Xp;Yp translocations occurred between the X- and the Y-differential regions in case 1, between PRKX and inverted PRKY in case 2 and between the X-chromosomal short arm pseudoautosomal region and the Y-differential regions in cases 3 and 4. The distance of the Yp breakpoint from SRY was ~0.75 Mb in case 1, ~6.5 Mb in case 2, ~2.3 Mb in case 3 and ~72 kb in case 4. The Xp;Yp translocation occurred within an 87-bp homologous segment of PRKX and PRKY in case 2, and between non-homologous regions with addition of an 18-bp sequence of unknown origin in case 4. X-inactivation analysis revealed random inactivation in cases 1-4. The results argue against the notion that undermasculinization in 46,XX-TDSD is prone to occur when translocated Yp materials are small (<100 kb of the Y-differential region), and imply that the Xp;Yp translocations result from several mechanisms including non-allelic homologous recombination and non-homologous end joining.
Yanos, Philip T; Vayshenker, Beth; Pleskach, Pavel; Mueser, Kim T
2016-07-01
There is a dearth of research on what factors are predictive of insight among people with severe mental illness and co-occurring PTSD. Data were drawn from 146 participants with severe mental illness, co-occurring PTSD and elevated psychotic symptoms participating in a randomized controlled trial comparing two interventions for PTSD among people with severe mental illness. We examined the clinical and demographic correlates of insight at baseline, the relationship between baseline insight and treatment participation, the relationship between treatment participation and post-treatment insight, and the relationship between change in insight and change in other clinical variables. Impaired insight was relatively common, with roughly half the sample demonstrating mild or moderate impairment at baseline. Baseline insight was associated with fewer psychotic and disorganized symptoms, and greater emotional discomfort and PTSD knowledge, but was not associated with negative symptoms, PTSD symptoms, depression/anxiety, or treatment participation. Participation in PTSD treatment was associated with increased insight at post-treatment. Improved insight was associated with improvements in disorganization and negative symptoms, but not with knowledge of PTSD or positive symptoms. The findings suggest that engagement in treatment that includes educating people about PTSD may lead to improvements in insight and related improvements in other psychiatric symptoms. Copyright © 2016 Elsevier Inc. All rights reserved.
Easterly and westerly wind events in the equatorial Pacific ocean and their oceanic response
NASA Astrophysics Data System (ADS)
Puy, Martin; Lengaigne, Matthieu; Vialard, Jerome; Guilyardi, Eric
2014-05-01
Intraseasonal wind variability is known to influence the onset and evolution of the El Niño Southern Oscillation (ENSO), in particular through the occurrence of Westerly Wind Events (WWEs) in the western Equatorial Pacific. For predictability purposes, it is important to identify the large-scale atmospheric controls on the occurrence of those WWEs. We hence carefully assess the link between equatorial WWEs and large-scale atmospheric waves. We find that WWEs preferentially occur during convectively active phases associated with equatorial atmospheric Rossby waves (74% against 15% if the distribution were random) and with the MJO (60% against 15%). We also find that WWEs that occur in association with those atmospheric waves tend to be stronger. The results also show that WWEs that occur in association with the MJO tend to be longer than others and tend to have a larger impact on SST, both on the eastern edge of the warm pool and in the eastern Pacific. We further show that the central and eastern equatorial Pacific is home to frequent easterly wind events (EWEs). These EWEs are also shown to be influenced by atmospheric Rossby waves and the MJO, but to a lesser extent than WWEs. We will discuss the potential influence of EWEs on the ENSO cycle, and propose a modeling strategy to test the influence of these EWEs/WWEs on the ENSO evolution.
Practice schedule and acquisition, retention, and transfer of a throwing task in 6-yr.-old children.
Granda Vera, Juan; Montilla, Mariano Medina
2003-06-01
Earlier studies have raised questions about the usefulness of variable and random practice in learning motor tasks so this study was designed to investigate the effects of contextual interference in young children, and specifically to evaluate the effectiveness of variable or random practice structure in 6-yr.-old boys and girls. Participants on a variable practice schedule showed better performances than those on a blocked schedule. The differences between the two groups were significant in the acquisition, retention, and transfer phases. These results support the hypothesis that contextual interference enhances skill learning. Because the study involved groups of young children in the setting of their normally scheduled physical education class, the practical applications of the results are evident.
Estimating the signal-to-noise ratio of AVIRIS data
NASA Technical Reports Server (NTRS)
Curran, Paul J.; Dungan, Jennifer L.
1988-01-01
To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
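One way to read the geostatistical idea is as a nugget estimate: compute an empirical semivariogram along an image transect and extrapolate the first few lags back to lag zero, where the intercept approximates the variance of sensor noise plus intrapixel variability. The synthetic transect below is an assumption for illustration, not AVIRIS data.

# Illustrative sketch only: nugget of an empirical semivariogram as a noise-variance estimate.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
signal = np.cumsum(rng.normal(scale=0.05, size=n))   # smooth, spatially correlated "land cover" signal
noise = rng.normal(scale=0.5, size=n)                # white sensor/intrapixel noise
z = signal + noise

lags = np.arange(1, 11)
gamma = np.array([0.5 * np.mean((z[h:] - z[:-h])**2) for h in lags])  # empirical semivariogram

# Extrapolate the first few lags back to lag 0 with a straight line to estimate the nugget.
slope, intercept = np.polyfit(lags[:5], gamma[:5], 1)
print(f"nugget (noise variance) estimate: {intercept:.3f}   true noise variance: {0.5**2:.3f}")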
Perception of randomness: On the time of streaks.
Sun, Yanlong; Wang, Hongbin
2010-12-01
People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the most delayed pattern for its first occurrence. It is argued that when time is of essence, how often a pattern is to occur (mean time, or, frequency) and when a pattern is to first occur (waiting time) are different questions and bear different psychological relevance. The waiting time statistics may provide a quantitative measure to the psychological distance when people are expecting a probabilistic event, and such measure is consistent with both of the representativeness and availability heuristics in people's perception of randomness. We discuss some of the recent empirical findings and suggest that people's judgment and generation of random sequences may be guided by their actual experiences of the waiting time statistics. Published by Elsevier Inc.
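The waiting-time point is easy to verify by simulation: for fair coins, the streak HH takes longer on average to first appear than the non-streak pattern HT (expected waiting times 6 versus 4 flips), even though both patterns occur equally often in the long run. A small Monte Carlo check, with the trial count an arbitrary choice:

# Illustrative sketch only: mean waiting time until the first occurrence of a pattern.
import numpy as np

rng = np.random.default_rng(5)

def first_occurrence(pattern, rng, max_len=10_000):
    # Number of fair-coin flips until `pattern` (e.g. "HH") first appears.
    window = ""
    for t in range(1, max_len + 1):
        window = (window + ("H" if rng.random() < 0.5 else "T"))[-len(pattern):]
        if window == pattern:
            return t
    return max_len

trials = 20_000
wait_hh = np.mean([first_occurrence("HH", rng) for _ in range(trials)])
wait_ht = np.mean([first_occurrence("HT", rng) for _ in range(trials)])
print(f"mean waiting time for HH: {wait_hh:.2f} (theory 6)")
print(f"mean waiting time for HT: {wait_ht:.2f} (theory 4)")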
Massof, Robert W
2014-10-01
A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Siegel, Jason T; Tan, Cara N; Rosenberg, Benjamin D; Navarro, Mario A; Thomson, Andrew L; Lyrintzis, Elena A; Alvaro, Eusebio M; Jones, Natalie D
2016-03-01
The IIFF Model (Information, Immediate and Complete Registration Mechanism, Focused Engagement, Favorable Activation) offers a checklist of considerations for interventions seeking to influence organ donor registration behavior. One aspect of the model, favorable activation, recommends considering the emotional and motivational state of a potential donor registrant. Given that most donor registrations occur at the Department of Motor Vehicles (DMV), we considered whether emotions experienced while at the DMV could influence registration rates. The current research effort investigated the emotions people experience while visiting the DMV, explored whether these emotions are associated with donor registration intentions, and experimentally assessed whether DMV experiences influence donor registration. Three studies were conducted through Amazon's Mechanical Turk. In Study 1, we randomly assigned participants to either recall a prior DMV experience or to a comparison condition. Emotions associated with the recalled experiences were the dependent variable. Study 2 assessed the correlations between nine different emotions and donor registration intentions. Study 3 randomly assigned participants to recall a prior frustrating DMV experience or to a comparison condition. Intention to register to donate was the dependent variable. Study 1 found that recalling a prior DMV experience was associated with more negative and less positive emotions than the comparison condition. Study 2 found that increased levels of negative emotion could be problematic, as negative emotions were associated with decreased donor intentions. Study 3 found that recalling a frustrating DMV experience resulted in significantly lower intentions to register as an organ donor (vs. a control condition). Although not all DMV experiences are negative, these data indicated a relationship between the DMV and negative emotions; an association between negative emotions and lower donor registration intentions; and, a causal relationship between negative DMV experiences and decreased registration intentions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pleiotropic Models of Polygenic Variation, Stabilizing Selection, and Epistasis
Gavrilets, S.; de-Jong, G.
1993-01-01
We show that in polymorphic populations many polygenic traits pleiotropically related to fitness are expected to be under apparent "stabilizing selection" independently of the real selection acting on the population. This occurs, for example, if the genetic system is at a stable polymorphic equilibrium determined by selection and the nonadditive contributions of the loci to the trait value either are absent, or are random and independent of those to fitness. Stabilizing selection is also observed if the polygenic system is at an equilibrium determined by a balance between selection and mutation (or migration) when both additive and nonadditive contributions of the loci to the trait value are random and independent of those to fitness. We also compare different viability models that can maintain genetic variability at many loci with respect to their ability to account for the strong stabilizing selection on an additive trait. Let V_m be the genetic variance supplied by mutation (or migration) each generation, V_g be the genotypic variance maintained in the population, and n be the number of the loci influencing fitness. We demonstrate that in mutation (migration)-selection balance models the strength of apparent stabilizing selection is of order V_m/V_g. In the overdominant model and in the symmetric viability model the strength of apparent stabilizing selection is approximately 1/(2n) that of total selection on the whole phenotype. We show that a selection system that involves pairwise additive by additive epistasis in maintaining variability can lead to a lower genetic load and genetic variance in fitness (approximately 1/(2n) times) than an equivalent selection system that involves overdominance. We show that, in the epistatic model, the apparent stabilizing selection on an additive trait can be as strong as the total selection on the whole phenotype. PMID:8325491
Modeling sediment transport as a spatio-temporal Markov process.
NASA Astrophysics Data System (ADS)
Heyman, Joris; Ancey, Christophe
2014-05-01
Despite a century of research on bedload sediment transport in rivers, its constitutive laws remain largely unknown; indeed, our ability to predict mid- to long-term transported volumes within a reasonable confidence interval is almost nil. The intrinsic fluctuating nature of bedload transport may be one of the most important reasons why classical approaches fail. A microscopic probabilistic framework has the advantage of taking these fluctuations into account at the particle scale, to understand their effect on macroscopic variables such as the sediment flux. In this framework, bedload transport is seen as the random motion of particles (sand, gravel, pebbles...) over a two-dimensional surface (the river bed). The number of particles in motion, as well as their velocities, are random variables. In this talk, we show how a simple birth-death Markov model governing particle motion on a regular lattice accurately reproduces the spatio-temporal correlations observed at the macroscopic level. Entrainment, deposition, and transport of particles by the turbulent fluid (air or water) are assumed to be independent, memoryless processes that modify the number of particles in motion. By means of the Poisson representation, we obtain a Fokker-Planck equation that is exactly equivalent to the master equation and thus valid for all cell sizes. The analysis shows that the number of moving particles evolves locally far from thermodynamic equilibrium. Several analytical results are presented and compared with experimental data. The index of dispersion (variance-to-mean ratio) is shown to grow from unity at small scales to larger values at larger scales, confirming the non-Poissonian behavior of bedload transport. We also study the one- and two-dimensional K-function, which gives the average number of moving particles located in a ball centered on a particle centroid as a function of the ball's radius.
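A single-cell caricature of the birth-death picture (entrainment at a constant rate, deposition at a rate proportional to the number of moving particles, transport between cells neglected) can be simulated with a Gillespie-style algorithm. The rate values are assumptions for illustration; for this memoryless single-cell model the stationary distribution is Poisson, so the index of dispersion stays near unity, consistent with the small-scale limit described above.

# Illustrative sketch only: Gillespie simulation of a single-cell birth-death process
# for the number of moving particles; rate values are assumed, not measured.
import numpy as np

rng = np.random.default_rng(11)
lam, mu = 5.0, 0.8                     # entrainment rate and per-particle deposition rate (assumed)
t, t_end, n = 0.0, 2000.0, 0
dt_sample, next_sample = 1.0, 100.0    # record the state every second after a 100 s burn-in
samples = []
while t < t_end:
    total_rate = lam + mu * n
    t_next = t + rng.exponential(1.0 / total_rate)
    while next_sample < min(t_next, t_end):
        samples.append(n)
        next_sample += dt_sample
    t = t_next
    if rng.random() < lam / total_rate:
        n += 1                         # a particle is entrained
    else:
        n -= 1                         # a particle is deposited

samples = np.array(samples)
print(f"mean number in motion: {samples.mean():.2f} (theory {lam/mu:.2f})")
print(f"index of dispersion (variance/mean): {samples.var()/samples.mean():.2f}")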
Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred
2010-05-01
The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Improved high-dimensional prediction with Random Forests by the use of co-data.
Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A
2017-12-28
Prediction in high-dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables by co-data moderated sampling probabilities. Co-data here are defined as any type of information that is available on the variables of the primary data, but does not use its response labels. These moderated sampling probabilities are, inspired by empirical Bayes, learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
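A simplified sketch of the co-data idea: candidate features are drawn with probabilities moderated by co-data (here hypothetical external p-values) rather than uniformly. CoRF applies such weights when drawing candidate variables at each split and learns them empirically; the per-tree weighting, synthetic data, and fixed weights below are simplifying assumptions for illustration only.

# Illustrative sketch only: a forest whose feature subsets are drawn with
# co-data-weighted probabilities (simplified to per-tree rather than per-split weighting).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n, p, informative = 200, 1000, 20
X = rng.normal(size=(n, p))
y = (X[:, :informative].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

codata_pvals = np.concatenate([rng.uniform(0, 0.05, informative),   # hypothetical external evidence
                               rng.uniform(0, 1, p - informative)])
weights = -np.log(codata_pvals)
probs = weights / weights.sum()                                     # moderated sampling probabilities

def codata_forest(X, y, n_trees=100, mtry=50):
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, len(y), len(y))                      # bootstrap sample
        feats = rng.choice(p, size=mtry, replace=False, p=probs)    # co-data weighted feature draw
        tree = DecisionTreeClassifier(max_features="sqrt").fit(X[np.ix_(rows, feats)], y[rows])
        forest.append((tree, feats))
    return forest

forest = codata_forest(X, y)
votes = np.mean([t.predict(X[:, f]) for t, f in forest], axis=0)    # in-sample vote, for demonstration only
print("training accuracy of the co-data weighted forest:", ((votes > 0.5) == y).mean())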
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and help determine the mechanistic causes underlying changes in physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth either self-selected or controlled to 80° of knee flexion using a goniometer. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
Nell, Andre S; D'lom, Eva; Bouic, Patrick; Sabaté, Montserrat; Bosser, Ramon; Picas, Jordi; Amat, Mercè; Churchyard, Gavin; Cardona, Pere-Joan
2014-01-01
To evaluate the safety, tolerability and immunogenicity of three different doses (5, 25 and 50 µg) of the novel antituberculous vaccine RUTI compared to placebo in subjects with latent tuberculosis infection. Double-blind, randomized, placebo-controlled Phase II Clinical Trial (95 patients randomized). Three different RUTI doses and placebo were tested, randomized both in HIV-positive (n = 47) and HIV-negative subjects (n = 48), after completion of one month of isoniazid (INH) pre-vaccination. Each subject received two vaccine administrations, 28 days apart. Five patients withdrew and 90 patients completed the study. Assessment of safety showed no deaths during the study. Two subjects had serious adverse events: one had a retinal detachment while taking INH and was not randomized, and the other had a severe local injection site abscess on each arm and was hospitalized; causality was assessed as very likely and, by the end of the study, the outcome had resolved. All patients except 5 (21%) in the placebo group (3 HIV+ and 2 HIV-) reported at least one adverse event (AE) during the study. The most frequently occurring AEs among RUTI recipients were (% in HIV+/-): injection site reactions [erythema (91/92), induration (94/92), local nodules (46/25), local pain (66/75), sterile abscess (6/6), swelling (74/83), ulcer (20/11), headache (17/22) and nasopharyngitis (20/5)]. These events were mostly mild and well tolerated. Overall, a polyantigenic response was observed, which differed by HIV status. The best polyantigenic response was obtained when administering 25 µg RUTI, especially in HIV-positive subjects, and was not increased after the second inoculation. This Phase II clinical trial demonstrates reasonable tolerability of RUTI. The immunogenicity profile of the RUTI vaccine in LTBI subjects, although variable among groups, allows us to consider a single injection of one of the highest doses in future trials, preceded by an extended safety clinical phase. ClinicalTrials.gov NCT01136161.
Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J
2011-11-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R_adj^2), followed by position (28 ± 24% of R_adj^2) and speed (11 ± 19% of R_adj^2). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R_adj^2 values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.
Shafiey, Hassan; Gan, Xinjun; Waxman, David
2017-11-01
To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
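The "standard" reset-to-boundary Euler scheme discussed above can be written in a few lines for a square-root diffusion dX = a(1 − X)dt + c√X dW, which has a naturally occurring boundary at X = 0. The parameter values are assumptions for illustration; the paper's corrected scheme modifies how steps that would cross the boundary are treated.

# Illustrative sketch only: naive Euler scheme with reset to a natural boundary at zero.
import numpy as np

rng = np.random.default_rng(9)
a, c, dt, n_steps, n_paths = 1.0, 0.8, 0.01, 5000, 2000   # assumed parameters
x = np.full(n_paths, 0.05)                      # start near the boundary
for _ in range(n_steps):
    dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
    x = x + a * (1.0 - x) * dt + c * np.sqrt(np.maximum(x, 0.0)) * dw
    x = np.maximum(x, 0.0)                      # "standard" approach: reset crossing trajectories to the boundary
print(f"sample mean after {n_steps*dt:.0f} time units: {x.mean():.3f} (long-run mean should approach 1)")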
Zhang, Kai; Li, Yun; Schwartz, Joel D.; O'Neill, Marie S.
2014-01-01
Hot weather increases the risk of mortality. Previous studies used different sets of weather variables to characterize heat stress, resulting in variation in heat-mortality associations depending on the metric used. We employed a statistical learning method – random forests – to examine which of various weather variables had the greatest impact on heat-related mortality. We compiled a summertime daily weather and mortality counts dataset from four U.S. cities (Chicago, IL; Detroit, MI; Philadelphia, PA; and Phoenix, AZ) from 1998 to 2006. A variety of weather variables were ranked by their ability to predict deviation from typical daily all-cause and cause-specific death counts. Ranks of weather variables varied with city and health outcome. Apparent temperature appeared to be the most important predictor of heat-related all-cause mortality. Absolute humidity was, on average, the variable most frequently selected among the top predictors for all-cause mortality and seven cause-specific mortality categories. Our analysis affirms that apparent temperature is a reasonable variable for activating heat alerts and warnings, which are commonly based on predictions of total mortality in the next few days. Additionally, absolute humidity should be included in future heat-health studies. Finally, random forests can be used to guide the choice of weather variables in heat epidemiology studies. PMID:24834832
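A minimal sketch of the ranking step: fit a random forest to daily weather predictors and rank them by impurity-based importance. The variable names, synthetic excess-mortality outcome, and coefficients below are assumptions for illustration, not the study data or its exact importance measure.

# Illustrative sketch only: ranking weather variables by random-forest importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_days = 1500
weather = pd.DataFrame({
    "apparent_temp": rng.normal(28, 5, n_days),
    "absolute_humidity": rng.normal(14, 4, n_days),
    "min_temp": rng.normal(20, 4, n_days),
    "dew_point": rng.normal(18, 4, n_days),
})
# Synthetic excess-mortality signal driven mainly by apparent temperature and humidity (assumed).
excess_deaths = (0.8 * np.maximum(weather["apparent_temp"] - 32, 0)
                 + 0.3 * weather["absolute_humidity"] + rng.normal(0, 2, n_days))

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(weather, excess_deaths)
ranking = sorted(zip(weather.columns, rf.feature_importances_), key=lambda kv: -kv[1])
for name, imp in ranking:
    print(f"{name:18s} importance = {imp:.3f}")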
NASA Astrophysics Data System (ADS)
Chaibub Neto, Elias
2016-11-01
Clinical trials traditionally employ blinding as a design mechanism to reduce the influence of placebo effects. In practice, however, it can be difficult or impossible to blind study participants, and unblinded trials are common in medical research. Here we show how instrumental variables can be used to quantify and disentangle treatment and placebo effects in randomized clinical trials comparing control and active treatments in the presence of confounders. The key idea is to use randomization to separately manipulate treatment assignment and psychological encouragement (conversations/interactions that increase the participants' desire for improved symptoms). The proposed approach is able to improve the estimation of treatment effects in blinded studies and, most importantly, opens the door to accounting for placebo effects in unblinded trials.
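A minimal two-stage least squares (2SLS) sketch of the general idea of using randomization as an instrument: randomized assignment Z instruments for received treatment T, so the estimated effect on outcome Y is purged of confounding. The data are simulated and this is not the authors' estimator or dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
Z = rng.integers(0, 2, n)               # randomized assignment (instrument)
U = rng.standard_normal(n)              # unobserved confounder
T = (0.6 * Z + 0.4 * U + rng.standard_normal(n) > 0).astype(float)
Y = 1.5 * T + 1.0 * U + rng.standard_normal(n)   # true treatment effect = 1.5

# Stage 1: predict treatment from the instrument
X1 = np.column_stack([np.ones(n), Z])
T_hat = X1 @ np.linalg.lstsq(X1, T, rcond=None)[0]

# Stage 2: regress outcome on the predicted treatment
X2 = np.column_stack([np.ones(n), T_hat])
beta = np.linalg.lstsq(X2, Y, rcond=None)[0]
print("2SLS treatment effect estimate:", beta[1])
```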
A probabilistic model of a porous heat exchanger
NASA Technical Reports Server (NTRS)
Agrawal, O. P.; Lin, X. A.
1995-01-01
This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of the heat transfer phenomenon in superfluid flow through a porous medium is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for random spatial variations.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This avoids computing a considerable number of derivatives and eliminates linearization errors, so the Monte Carlo method is efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
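A short sketch of the Monte Carlo error propagation described above: samples of a measurement vector are pushed through a nonlinear transform and the mean and covariance of the result are estimated directly, with no linearization. The transform and the input distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([10.0, 0.5])                       # measured values
cov = np.array([[0.04, 0.005], [0.005, 0.01]])   # their covariance

def f(x):
    # nonlinear function of the measurements, e.g. polar -> cartesian
    r, theta = x[..., 0], x[..., 1]
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

samples = rng.multivariate_normal(mu, cov, size=100_000)
y = f(samples)
print("E[y]   =", y.mean(axis=0))
print("Cov[y] =\n", np.cov(y, rowvar=False))
```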
Dinglas, Victor D; Huang, Minxuan; Sepulveda, Kristin A; Pinedo, Mariela; Hopkins, Ramona O; Colantuoni, Elizabeth; Needham, Dale M
2015-01-09
Effective strategies for contacting and recruiting study participants are critical in conducting clinical research. In this study, we conducted two sequential randomized controlled trials of mail- and telephone-based strategies for contacting and recruiting participants, and evaluated participant-related variables' association with time to survey completion and survey completion rates. Subjects eligible for this study were survivors of acute lung injury who had been previously enrolled in a 12-month observational follow-up study evaluating their physical, cognitive and mental health outcomes, with their last study visit completed at a median of 34 months previously. Eligible subjects were contacted to complete a new research survey as part of two randomized trials, initially using a randomized mail-based contact strategy, followed by a randomized telephone-based contact strategy for non-responders to the mail strategy. Both strategies focused on using either a personalized versus a generic approach. In addition, 18 potentially relevant subject-related variables (e.g., demographics, last known physical and mental health status) were evaluated for association with time to survey completion. Of 308 eligible subjects, 67% completed the survey with a median (IQR) of 3 (2, 5) contact attempts required. There was no significant difference in the time to survey completion for either randomized trial of mail- or phone-based contact strategy. Among all subject-related variables, age ≤40 years and minority race were independently associated with a longer time to survey completion. We found that age ≤40 years and minority race were associated with a longer time to survey completion, but personalized versus generic approaches to mail- and telephone-based contact strategies had no significant effect. Repeating both mail and telephone contact attempts was important for increasing survey completion rate. NCT00719446.
Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.
2014-01-01
Background Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of the sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H
2014-11-01
Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory workaround is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this workaround. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
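A hedged sketch of what a log-normal S-curve looks like, under assumptions that are mine rather than the FAAT/ARES conventions: the probability of effect at a given incident field strength is taken as P(strength ≤ field) with a log-normal strength variable (its spread representing the random uncertainty), and the curve's placement is shifted by the systematic uncertainty at a chosen confidence level. All parameter values and the sign convention of the shift are illustrative.

```python
import numpy as np
from scipy.stats import norm

def s_curve(field_w_cm2, median_strength, sigma_random, sigma_systematic, confidence):
    """Probability of effect vs incident field strength (W/cm^2) at one
    confidence level; the systematic uncertainty shifts the curve's median."""
    shifted_median = median_strength * np.exp(norm.ppf(confidence) * sigma_systematic)
    z = (np.log(field_w_cm2) - np.log(shifted_median)) / sigma_random
    return norm.cdf(z)          # log-normal cumulant in the field strength

fields = np.logspace(-1, 3, 9)
for conf in (0.1, 0.5, 0.9):
    print(conf, np.round(s_curve(fields, 10.0, 0.6, 0.4, conf), 3))
```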
Beidel, Deborah C.; Frueh, B. Christopher; Uhde, Thomas W.; Wong, Nina; Mentrikoski, Janelle M.
2010-01-01
This study examined the efficacy of a multicomponent cognitive-behavioral therapy, Trauma Management Therapy, which combines exposure therapy and social emotional rehabilitation, to exposure therapy only in a group of male combat veterans with chronic posttraumatic stress disorder (PTSD). Thirty-five male Vietnam veterans with PTSD were randomly assigned to receive either Trauma Management Therapy (TMT) or Exposure Therapy Only (EXP). Participants were assessed at pre-treatment, mid-treatment, and post-treatment. Primary clinical outcomes were reduction of PTSD symptoms and improved social emotional functioning. Results indicated that veterans in both conditions showed statistically significant and clinically meaningful reductions in PTSD symptoms from pre- to post-treatment, though consistent with a priori hypotheses there were no group differences on PTSD variables. However, compared to the EXP group, participants in the TMT group showed increased frequency in social activities and greater time spent in social activities. These changes occurred from mid-treatment (after completion of exposure therapy) to post-treatment (after completion of the social emotional rehabilitation component); supporting the hypothesis that TMT alone would result in improved social functioning. Although the TMT group also had a significant decrease in episodes of physical rage, that change occurred prior to introduction of the social emotional component of TMT. This study demonstrates efficacy of exposure therapy for treating the core symptoms of PTSD among combat veterans with a severe and chronic form of this disorder. Moreover, multi-component CBT shows promise for improving social functioning beyond that provided by exposure therapy alone, particularly by increasing social engagement/interpersonal functioning in a cohort of veterans with severe and chronic PTSD. PMID:20951543
The role of trauma team leaders in missed injuries: does specialty matter?
Leeper, W Robert; Leeper, Terrence John; Vogt, Kelly Nancy; Charyk-Stewart, Tanya; Gray, Daryl Kenneth; Parry, Neil Geordie
2013-09-01
Previous studies have identified missed injuries as a common and potentially preventable occurrence in trauma care. Several patient- and injury-related variables have been identified, which predict for missed injuries; however, differences in rate and severity of missed injuries between surgeon and nonsurgeon trauma team leaders (TTLs) have not previously been reported. A retrospective review was conducted on a random sample of 10% of all trauma patients (Injury Severity Score [ISS] > 12) from 1999 to 2009 at a Canadian Level I trauma center. Missed injuries were defined as those identified greater than 24 hours after presentation and were independently adjudicated by two reviewers. TTLs were identified as either surgeons or nonsurgeons. Of our total trauma population of 2,956 patients, 300 charts were randomly pulled for detailed review. Missed injuries occurred in 46 patients (15%). Most common missed injuries were fractures (n = 32, 70%) and thoracic injuries (n = 23, 50%). The majority of missed injuries resulted in minor morbidity with only 5 (11%) requiring operative intervention. On univariate analysis, higher ISS (p < 0.01), higher maximum Abbreviated Injury Scale (MAIS) score of the thorax (p < 0.01), and nonsurgeon TTL status were predictive of missed injuries (p = 0.02). Multivariable logistic regression revealed that, after adjustment for age, ISS, and severe head injuries, the presence of a nonsurgeon TTL was associated with an increased odds of missed injury (odds ratio, 2.15; 95% confidence interval, 1.10-4.20). Missed injuries occurred in 15% of patients. A unique finding was the increased odds of missed injury with nonsurgeon TTLs. Further research should be undertaken to explore this relationship, elucidate potential causes, and propose interventions to narrow this discrepancy between TTL provider types. Therapeutic study, level IV. Prognostic and epidemiologic study, level III.
Distiller, Larry A; Nortje, Hendrik; Wellmann, Holger; Amod, Aslam; Lombard, Landman
2014-11-01
To compare the efficacy of 500 U/mL (U-500) regular insulin + metformin with U-500 regular insulin + metformin + exenatide in improving glycemic control in patients with severely insulin-resistant type 2 diabetes mellitus (T2DM). Thirty patients with T2DM and severe insulin resistance were screened, and 28 were randomized to regular insulin U-500 + metformin or the GLP-1 analog exenatide, U-500, and metformin. Glycated hemoglobin (HbA1c) levels, body weight, and insulin doses were documented at baseline and at 3 and 6 months. The number and severity of hypoglycemic episodes were noted. There were 7 males and 7 females in each group (U-500 + metformin and U-500 + metformin + exenatide). Overall, U-500 insulin + metformin, either alone or with the addition of exenatide, resulted in a significant improvement in HbA1c in both groups, with no significant difference between the 2 groups. There was no meaningful weight change in those utilizing exenatide. Those on U-500 insulin and metformin alone had a tendency toward some weight gain. No severe hypoglycemia occurred during the study period. Symptomatic hypoglycemia was more common in the group on exenatide, but this occurred in only 5 patients, and the clinical significance of this is uncertain. Insulin dosage changes on U-500 regular insulin were variable but tended to be lower in those subjects on exenatide. U-500 regular insulin + metformin is effective for the treatment of T2DM patients with severe insulin resistance. The addition of exenatide may ameliorate potential weight gain but provides no additional improvement in glycemia.
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R
2016-10-01
Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
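A minimal sketch of a two-parameter random utility model of the kind described above: hyperbolic discounting plus a logistic choice rule, fit by maximum likelihood. The choice data and parameter values are illustrative, not the article's data set or exact specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# each row: immediate amount, delayed amount, delay (days), chose_delayed (0/1)
data = np.array([[20, 50, 30, 1],
                 [40, 50, 30, 0],
                 [10, 50, 90, 1],
                 [45, 50, 90, 0],
                 [30, 50,  7, 1]], dtype=float)

def neg_log_lik(params):
    k, s = np.exp(params)                           # k: discount rate, s: choice variability
    v_now = data[:, 0]
    v_later = data[:, 1] / (1.0 + k * data[:, 2])   # hyperbolic discounting
    p_later = expit((v_later - v_now) / s)          # logistic choice rule
    y = data[:, 3]
    p = np.clip(np.where(y == 1, p_later, 1.0 - p_later), 1e-12, 1.0)
    return -np.sum(np.log(p))

fit = minimize(neg_log_lik, x0=np.log([0.01, 5.0]), method="Nelder-Mead")
k_hat, s_hat = np.exp(fit.x)
print("discount rate k =", k_hat, " choice variability s =", s_hat)
```

The second parameter separates how steeply delayed rewards are devalued from how noisily the participant chooses, which is the distinction the abstract emphasizes.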
CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, D.E.
1977-04-01
A value y for a uniform variable on (0,1) is generated and a table of 96-percent points for the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ, σ) variable.
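A sketch of the inverse-transform scheme described above: a uniform variate y is mapped to a standard-normal value x by interpolating a table of percent points on the central region and switching to an inverse-normal approximation in the tails; X = xσ + μ then gives X(μ, σ). Here scipy's norm.ppf stands in for the rational Chebyshev tail approximation, and the table size is an assumption.

```python
import numpy as np
from scipy.stats import norm

# table of percent points of N(0,1) covering 0.02 <= y <= 0.98
ys = np.linspace(0.02, 0.98, 97)
table = norm.ppf(ys)

def rvnorm(mu, sigma, rng):
    y = rng.uniform()
    if 0.02 <= y <= 0.98:
        x = np.interp(y, ys, table)   # table interpolation (central region)
    else:
        x = norm.ppf(y)               # tail: stand-in for the Chebyshev approximation
    return x * sigma + mu

rng = np.random.default_rng(3)
print([round(rvnorm(5.0, 2.0, rng), 3) for _ in range(5)])
```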
Greeven, Anja; van Balkom, Anton J L M; Spinhoven, Philip
2014-05-01
We aimed to investigate whether personality characteristics predict time to remission and psychiatric status. The follow-up was at most 6 years and was performed within the scope of a randomized controlled trial that investigated the efficacy of cognitive behavioral therapy, paroxetine, and placebo in hypochondriasis. The Life Chart Interview was administered to investigate for each year if remission had occurred. Personality was assessed at pretest by the Abbreviated Dutch Temperament and Character Inventory. Cox's regression models for recurrent events were compared with logistic regression models. Sixteen (36.4%) of 44 patients achieved remission during the follow-up period. Cox's regression yielded approximately the same results as the logistic regression. Being less harm avoidant and more cooperative were associated with a shorter time to remission and a remitted state after the follow-up period. Personality variables seem to be relevant for describing patients with a more chronic course of hypochondriacal complaints.
A stochastic bioburden model for spacecraft sterilization.
NASA Technical Reports Server (NTRS)
Roark, A. L.
1972-01-01
Development of a stochastic model of the probability distribution for the random variable representing the number of microorganisms on a surface as a function of time. The first basic principle associated with bioburden estimation is that viable particles are removed from surfaces. The second notion important to the analysis is that microorganisms in environments and on surfaces occur in clumps. The last basic principle relating to bioburden modeling is that viable particles are deposited on a surface. The bioburden on a spacecraft is determined by the amount and kind of control exercised on the spacecraft assembly location, the shedding characteristics of the individuals in the vicinity of the spacecraft, its orientation, the geographical location in which the assembly takes place, and the steps in the assembly procedure. The model presented has many of the features which are desirable for its use in the spacecraft sterilization programs currently being planned by NASA.
Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean
NASA Astrophysics Data System (ADS)
Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim
2006-07-01
A massive earthquake of magnitude 9.3 occurred on December 26, 2004 off northern Sumatra and generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys were performed after this tsunami event; in particular, several surveys of the south/east coast of India, the Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand were organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. A theoretical interpretation of this distribution, associated with the random coastal bathymetry and coastline, leads to log-normal functions. The observed data are also in very good agreement with the log-normal distribution, confirming the important role of the variable ocean bathymetry in the formation of the irregular wave height distribution along the coasts.
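A hedged sketch of the distributional check described above: runup heights along a stretch of coast are fit with a log-normal law and compared against the data with a goodness-of-fit test. The height values are illustrative placeholders, not survey data.

```python
import numpy as np
from scipy import stats

runup_m = np.array([2.1, 3.4, 1.8, 5.2, 4.1, 2.9, 6.3, 3.7, 2.5, 4.8])  # metres

log_h = np.log(runup_m)
mu, sigma = log_h.mean(), log_h.std(ddof=1)

# Kolmogorov-Smirnov test against the fitted log-normal
d_stat, p_value = stats.kstest(runup_m, "lognorm", args=(sigma, 0, np.exp(mu)))
print(f"fitted median = {np.exp(mu):.2f} m, sigma_log = {sigma:.2f}")
print(f"KS statistic = {d_stat:.3f}, p = {p_value:.3f}")
```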
Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R
2009-03-07
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
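A minimal sketch of the diffusive-transfer idea described above: over a time step tau, the number of molecules leaving each subvolume is drawn from a binomial, and the leavers are split among the neighbours with a multinomial (here a 1-D chain with reflecting ends). The jump-probability formula and rates are illustrative assumptions, not the MSA's exact conditioning.

```python
import numpy as np

def diffusive_transfer_step(counts, d, tau, rng):
    """counts: molecules per subvolume; d: jump rate to each neighbour."""
    n_sub = len(counts)
    new_counts = counts.copy()
    for i in range(n_sub):
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < n_sub]
        p_leave = 1.0 - np.exp(-d * len(neighbours) * tau)   # prob. a molecule jumps in tau
        leavers = rng.binomial(counts[i], p_leave)
        # split the leavers among the neighbours with a multinomial draw
        split = rng.multinomial(leavers, [1.0 / len(neighbours)] * len(neighbours))
        new_counts[i] -= leavers
        for j, n_ij in zip(neighbours, split):
            new_counts[j] += n_ij
    return new_counts

rng = np.random.default_rng(4)
x = np.array([1000, 0, 0, 0, 0])
for _ in range(50):
    x = diffusive_transfer_step(x, d=0.5, tau=0.1, rng=rng)
print(x, x.sum())   # molecule number is conserved
```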
The Effects of Meperidine Analgesia during Labor on Fetal Heart Rate
Sekhavat, Leila; Behdad, Shecoofah
2009-01-01
To estimate the effects of intramuscular meperidine analgesia on fetal heart rate (FHR) patterns compared with placebo. In a prospective randomized study, 150 healthy women with singleton term pregnancy requesting analgesia during active labor were to receive either intramuscular meperidine 50 mg (meperidine group) or normal saline (control group) when they requested analgesia. Fetal heart rate patterns occurring within 40 minutes of initiation of labor analgesia were retrospectively read by a maternal-fetal medicine specialist who was blind to the type of labor analgesia. Meperidine, compared with placebo, was associated with statistically significantly less beat-to-beat variability (absent or less than 5 beats per minute; 28% versus 5% of fetuses, P<0.05) and a lower proportion of FHR accelerations (37.3% versus 17.3%, P<0.05). FHR decelerations were also significantly more frequent than in the control group (25.5% versus 4%, P<0.05). Meperidine has deleterious effects on FHR. PMID:23675116
Data on the fungal species consumed by mammal species in Australia.
Nuske, S J; Vernes, K; May, T W; Claridge, A W; Congdon, B C; Krockenberger, A; Abell, S E
2017-06-01
The data reported here support the manuscript Nuske et al. (2017) [1]. Searches were made for quantitative data on the occurrence of fungi within dietary studies of Australian mammal species. The original location reported in each study was used as the lowest grouping variable within the dataset. To standardise the data and compare dispersal events from populations of different mammal species that might overlap, data from locations were further pooled and averaged across sites if they occurred within 100 km of a random central point. Three locations in Australia contained data on several (>7) mycophagous mammals; all other locations had data on 1-3 mammal species. Within these three locations, the identity of the fungal species was compared between mammal species' diets. A list of all fungal species found in Australian mammalian diets is also provided, along with the original reference and fungal synonym names.
Robustness of plasmon phased array nanoantennas to disorder
Arango, Felipe Bernal; Thijssen, Rutger; Brenny, Benjamin; Coenen, Toon; Koenderink, A. Femius
2015-01-01
We present cathodoluminescence experiments that quantify the response of plasmonic Yagi-Uda antennas fabricated on one-dimensional silicon nitride waveguides as a function of electron beam excitation position and emission wavelength. At the near-infrared antenna design wavelength, the cathodoluminescence signal is robustly strongest when exciting the antenna at the reflector element. Yet at just slightly shorter wavelengths the signal is highly variable from antenna to antenna and wavelength to wavelength. Hypothesizing that fabrication randomness is at play, we analyze the resilience of plasmon Yagi-Uda antennas to variations in element size of just 5 nm. While in our calculations the appearance of directivity is robust, both the obtained highest directivity and the wavelength at which it occurs vary markedly between realizations. The calculated local density of states is invariably high at the reflector for the design wavelength, but varies dramatically in spatial distribution for shorter wavelengths, consistent with the cathodoluminescence experiments. PMID:26038871
Potential use of TNF-α inhibitors in systemic sclerosis.
Murdaca, Giuseppe; Spanò, Francesca; Contatore, Miriam; Guastalla, Andrea; Puppo, Francesco
2014-01-01
Systemic sclerosis (SSc) is a rare connective tissue disease characterized by chronic inflammation and fibrosis of the skin, vascular abnormalities and variable involvement of organs. TNF-α has a central role in initial host response to infections and in the pathogenesis of various systemic immune-mediated diseases. Serum levels of TNF-α are elevated in patients with SSc and favor the development of pulmonary fibrosis and pulmonary arterial hypertension. Inflammatory arthritis can occur in patients with SSc. Infliximab and etanercept may improve the inflammatory arthritis and disability in SSc. TNF-α inhibitors reduce the systemic inflammation, improve the endothelial function decreasing the risk of pulmonary arterial hypertension progression and of acute cardiovascular and/or cerebrovascular events. Physicians need to be aware of the potential risks of tuberculosis reactivation and opportunistic infections. Randomized controlled trials with TNF-α inhibitors in patients with SSc are needed to confirm the potential role of these agents in the treatment of SSc.
NASA Astrophysics Data System (ADS)
Galati, Rosa; Simon, Charles; Henry, Paul F.; Weller, Mark T.
2008-03-01
Variable temperature, 2K
Metal-organic frameworks with dynamic interlocked components
NASA Astrophysics Data System (ADS)
Vukotic, V. Nicholas; Harris, Kristopher J.; Zhu, Kelong; Schurko, Robert W.; Loeb, Stephen J.
2012-06-01
The dynamics of mechanically interlocked molecules such as rotaxanes and catenanes have been studied in solution as examples of rudimentary molecular switches and machines, but in this medium, the molecules are randomly dispersed and their motion incoherent. As a strategy for achieving a higher level of molecular organization, we have constructed a metal-organic framework material using a [2]rotaxane as the organic linker and binuclear Cu(II) units as the nodes. Activation of the as-synthesized material creates a void space inside the rigid framework that allows the soft macrocyclic ring of the [2]rotaxane to rotate rapidly, unimpeded by neighbouring molecular components. Variable-temperature 13C and 2H solid-state NMR experiments are used to characterize the nature and rate of the dynamic processes occurring inside this unique material. These results provide a blueprint for the future creation of solid-state molecular switches and molecular machines based on mechanically interlocked molecules.
The Glimm scheme for perfect fluids on plane-symmetric Gowdy spacetimes
NASA Astrophysics Data System (ADS)
Barnes, A. P.; Lefloch, P. G.; Schmidt, B. G.; Stewart, J. M.
2004-11-01
We propose a new, augmented formulation of the coupled Euler-Einstein equations for perfect fluids on plane-symmetric Gowdy spacetimes. The unknowns of the augmented system are the density and velocity of the fluid and the first- and second-order spacetime derivatives of the metric. We solve the Riemann problem for the augmented system, allowing propagating discontinuities in both the fluid variables and the first- and second-order derivatives of the geometry coefficients. Our main result, based on Glimm's random choice scheme, is the existence of solutions with bounded total variation of the Euler-Einstein equations, up to the first time where a blow-up singularity (unbounded first-order derivatives of the geometry coefficients) occurs. We demonstrate the relevance of the augmented system for numerical relativity. We also consider general vacuum spacetimes and solve a Riemann problem, by relying on a theorem by Rendall on the characteristic value problem for the Einstein equations.
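A hedged sketch of Glimm's random choice scheme itself, applied to the scalar Burgers equation u_t + (u²/2)_x = 0 rather than the Euler-Einstein system treated above: at each step the exact Riemann solution is sampled at a random point in every cell, so discontinuities propagate without numerical smearing. The CFL bound of 1/2 and the periodic grid are assumptions of this toy setup.

```python
import numpy as np

def riemann_burgers(ul, ur, xi):
    """Exact Riemann solution of Burgers' equation evaluated at xi = x/t."""
    if ul > ur:                              # shock with speed (ul + ur)/2
        return ul if xi < 0.5 * (ul + ur) else ur
    if xi <= ul:                             # rarefaction fan otherwise
        return ul
    if xi >= ur:
        return ur
    return xi

def glimm_step(u, dx, dt, theta):
    n = len(u)
    u_new = np.empty_like(u)
    for i in range(n):
        if theta <= 0.5:                     # sample the wave from the left interface
            u_new[i] = riemann_burgers(u[(i - 1) % n], u[i], theta * dx / dt)
        else:                                # sample the wave from the right interface
            u_new[i] = riemann_burgers(u[i], u[(i + 1) % n], (theta - 1.0) * dx / dt)
    return u_new

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 200, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)              # step data -> right-moving shock
dx = x[1] - x[0]
dt = 0.4 * dx                                 # CFL = 0.4 (< 1/2) since |u| <= 1
for _ in range(100):
    u = glimm_step(u, dx, dt, rng.uniform())
print("approximate shock position:", x[np.argmax(np.abs(np.diff(u)))])
```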
Clustering promotes switching dynamics in networks of noisy neurons
NASA Astrophysics Data System (ADS)
Franović, Igor; Klinshov, Vladimir
2018-02-01
Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.
Dairy consumption, systolic blood pressure, and risk of hypertension: Mendelian randomization study
USDA-ARS?s Scientific Manuscript database
Objective: To examine whether previous observed inverse associations of dairy intake with systolic blood pressure and risk of hypertension were causal. Design: Mendelian randomization study using the single nucleotide polymorphism rs4988235 related to lactase persistence as an instrumental variable...
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and the Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with respect to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the embedded optimization strategy within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Schölzel, C.; Friederichs, P.
2008-10-01
Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast field of literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
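A hedged sketch of the copula idea for a station-data setting: daily temperature and precipitation are transformed to uniform margins by their empirical ranks, a Gaussian copula correlation is estimated on the normal scores, and new dependent pairs are simulated. The data here are synthetic stand-ins for observations, and the Gaussian copula is just one of the families the paper surveys.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
temp = rng.normal(15, 5, 500)
precip = np.exp(0.08 * temp + rng.normal(0, 0.5, 500))   # dependent, skewed margin

# 1. empirical probability integral transform to uniform margins
u = stats.rankdata(temp) / (len(temp) + 1)
v = stats.rankdata(precip) / (len(precip) + 1)

# 2. Gaussian copula parameter = correlation of the normal scores
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
print("copula correlation:", round(rho, 3))

# 3. simulate new dependent uniforms from the fitted copula
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000)
u_sim, v_sim = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# 4. map back through the empirical quantiles of each margin
temp_sim = np.quantile(temp, u_sim)
precip_sim = np.quantile(precip, v_sim)
```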
Jolles, Mónica Pérez; Haynes-Maslow, Lindsey; Roberts, Megan C; Dusetzina, Stacie B
2015-08-01
Individuals with mental illness experience poor health and may die prematurely from chronic illness. Understanding whether the presence of co-occurring chronic physical health conditions complicates mental health treatment is important, particularly among patients seeking treatment in primary care settings. Examine (1) whether the presence of chronic physical conditions is associated with mental health service use for individuals with depression who visit a primary care physician, and (2) whether race modifies this relationship. Secondary analysis of the National Ambulatory Medical Care Survey, a survey of patient-visits collected annually from a random sample of 3000 physicians in office-based settings. Office visits from 2007 to 2010 were pooled for adults aged 35-85 with a depression diagnosis at the time of visit (N=3659 visits). Mental health services were measured using a dichotomous variable indicating whether mental health services were provided during the office visit or a referral made for: (1) counseling, including psychotherapy and other mental health counseling and/or (2) prescribing of psychotropic medications. Most patient office visits (70%) where a depression diagnosis was recorded also had co-occurring chronic physical conditions recorded. The presence of at least 1 physical chronic condition was associated with a 6% decrease in the probability of receiving any mental health services (P<0.05). There were no differences in service use by race/ethnicity after controlling for other factors. Additional research is needed on medical care delivery among patients with co-occurring health conditions, particularly as the health care system moves toward an integrated care model.
Multivariate analysis: greater insights into complex systems
USDA-ARS?s Scientific Manuscript database
Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
Scenarios for Motivating the Learning of Variability: An Example in Finances
ERIC Educational Resources Information Center
Cordani, Lisbeth K.
2013-01-01
This article explores an example in finances in order to motivate the learning of random variables for beginners in statistics. In addition, it offers a relationship between the standard deviation and the range in a very specific situation.
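An illustrative sketch, not the article's own example: for a return that takes two equally likely values, the standard deviation equals half the range, the kind of concrete link between the two measures a beginner can verify directly.

```python
import numpy as np

returns = np.array([-0.05, 0.15])          # two equally likely outcomes
probs = np.array([0.5, 0.5])

mean = np.sum(probs * returns)
sd = np.sqrt(np.sum(probs * (returns - mean) ** 2))
rng_width = returns.max() - returns.min()

print(f"mean = {mean:.3f}, sd = {sd:.3f}, range/2 = {rng_width / 2:.3f}")
```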
Effect of Escitalopram on Hot Flash Interference: A Randomized, Controlled Trial
Carpenter, Janet S.; Guthrie, Katherine A.; Larson, Joseph C.; Freeman, Ellen W.; Joffe, Hadine; Reed, Susan D.; Ensrud, Kristine E.; LaCroix, Andrea Z.
2012-01-01
Objectives To estimate the effect of escitalopram 10–20 mg/day versus placebo for reducing hot flash interference in daily life and understand correlates and predictors of reductions in hot flash interference, a key measure of quality of life. Design Multi-site, randomized, double-blind, placebo-controlled clinical trial. Patients 205 midlife women (46% African-American) who met criteria participated. Setting MsFLASH clinical sites in Boston, Indianapolis, Oakland, and Philadelphia. Intervention After baseline, women were randomized to 1 pill of escitalopram 10 mg/day (n=104) or placebo (n=101) with follow-up at 4 and 8 weeks. At week 4, those not achieving 50% fewer hot flashes were increased to 2 pills daily (20 mg/day or 2 placebo pills). Main outcome measures The Hot Flash Related Daily Interference Scale; correlates were variables from hot flash diaries; predictors were baseline demographics, clinical variables, depression, anxiety, sleep quality, and hot flashes. Results Compared with placebo, escitalopram reduced hot flash interference by an additional 6.0 points at week 4 and 3.4 points at week 8 (p=0.012). Reductions in hot flash interference correlated with changes in hot flash diary variables. However, baseline variables did not significantly predict reductions in hot flash interference. Conclusions Escitalopram 10–20 mg/day for 8 weeks improves women's quality of life, and this benefit did not vary by demographic, clinical, mood, sleep, or hot flash variables. PMID:22480818
Recourse-based facility-location problems in hybrid uncertain environment.
Wang, Shuming; Watada, Junzo; Pedrycz, Witold
2010-08-01
The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinitely many second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than those obtained using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
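A minimal sketch of what a SROM is, in one dimension: a continuous random variable is replaced by a handful of samples whose probabilities are chosen to match its moments and CDF. The target distribution, sample locations, and objective weights are illustrative assumptions, not the paper's elastodynamics setting.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target = lognorm(s=0.3, scale=1.0)                    # "true" random shear modulus
samples = target.ppf(np.linspace(0.05, 0.95, 7))      # fixed SROM sample locations

def objective(p):
    # mismatch in mean, second moment, and CDF at the sample points
    mean_err = (p @ samples - target.mean()) ** 2
    m2_err = (p @ samples**2 - (target.var() + target.mean()**2)) ** 2
    cdf_err = np.mean((np.cumsum(p) - target.cdf(samples)) ** 2)
    return mean_err + m2_err + cdf_err

cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)   # probabilities sum to 1
bounds = [(0.0, 1.0)] * len(samples)
p0 = np.full(len(samples), 1.0 / len(samples))
res = minimize(objective, p0, bounds=bounds, constraints=cons)

print("SROM samples:      ", np.round(samples, 3))
print("SROM probabilities:", np.round(res.x, 3))
```

Once the random inputs are represented this way, expectations over the input uncertainty reduce to small weighted sums over the SROM samples, which is what makes the outer optimization deterministic and non-intrusive.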
Grace, Miriam; Hütt, Marc-Thorsten
2015-01-01
Spatiotemporal patterns often emerge from local interactions in a self-organizing fashion. In biology, the resulting patterns are also subject to the influence of the systematic differences between the system’s constituents (biological variability). This regulation of spatiotemporal patterns by biological variability is the topic of our review. We discuss several examples of correlations between cell properties and the self-organized spatiotemporal patterns, together with their relevance for biology. Our guiding, illustrative example will be spiral waves of cAMP in a colony of Dictyostelium discoideum cells. Analogous processes take place in diverse situations (such as cardiac tissue, where spiral waves occur in potentially fatal ventricular fibrillation) so a deeper understanding of this additional layer of self-organized pattern formation would be beneficial to a wide range of applications. One of the most striking differences between pattern-forming systems in physics or chemistry and those in biology is the potential importance of variability. In the former, system components are essentially identical with random fluctuations determining the details of the self-organization process and the resulting patterns. In biology, due to variability, the properties of potentially very few cells can have a driving influence on the resulting asymptotic collective state of the colony. Variability is one means of implementing a few-element control on the collective mode. Regulatory architectures, parameters of signaling cascades, and properties of structure formation processes can be "reverse-engineered" from observed spatiotemporal patterns, as different types of regulation and forms of interactions between the constituents can lead to markedly different correlations. The power of this biology-inspired view of pattern formation lies in building a bridge between two scales: the patterns as a collective state of a very large number of cells on the one hand, and the internal parameters of the single cells on the other. PMID:26562406
Gray, B.R.; Rogala, J.R.; Houser, J.N.
2013-01-01
Contiguous floodplain lakes ('lakes') have historically been used as study units for comparative studies of limnological variables that vary within lakes. The hierarchical nature of these studies implies that study variables may be correlated within lakes and that covariate associations may differ not only among lakes but also by spatial scale. We evaluated the utility of treating lakes as study units for limnological variables that vary within lakes based on the criteria of important levels of among-lake variation in study variables and the observation of covariate associations that vary among lakes. These concerns were selected, respectively, to ensure that lake signatures were distinguishable from within-lake variation and that lake-scale effects on covariate associations might provide inferences not available by ignoring those effects. Study data represented chlorophyll a (CHL) and inorganic suspended solids (ISS) data from lakes within three reaches of the Upper Mississippi River. Sampling occurred in summer from 1993 through 2005 (except 2003); numbers of lakes per reach varied from 7 to 19, and median lake area varied from 53 to 101 ha. CHL and ISS levels were modelled linearly, with lake, year and lake x year effects treated as random. For all reaches, the proportions of variation in CHL and ISS attributable to differences among lakes (including lake and lake x year effects) were substantial (range: 18%-73%). Finally, among-lake variation in CHL and ISS was strongly associated with covariates and covariate effects that varied by lakes or lake-years (including with vegetation levels and, for CHL, log(ISS)). These findings demonstrate the utility of treating floodplain lakes as study units for the study of limnological variables and the importance of addressing hierarchy within study designs when making inferences from data collected within floodplain lakes.
Teets, Aaron; Fraver, Shawn; Weiskittel, Aaron R; Hollinger, David Y
2018-03-11
A range of environmental factors regulate tree growth; however, climate is generally thought to most strongly influence year-to-year variability in growth. Numerous dendrochronological (tree-ring) studies have identified climate factors that influence year-to-year variability in growth for given tree species and location. However, traditional dendrochronology methods have limitations that prevent them from adequately assessing stand-level (as opposed to species-level) growth. We argue that stand-level growth analyses provide a more meaningful assessment of forest response to climate fluctuations, as well as the management options that may be employed to sustain forest productivity. Working in a mature, mixed-species stand at the Howland Research Forest of central Maine, USA, we used two alternatives to traditional dendrochronological analyses by (1) selecting trees for coring using a stratified (by size and species), random sampling method that ensures a representative sample of the stand, and (2) converting ring widths to biomass increments, which once summed, produced a representation of stand-level growth, while maintaining species identities or canopy position if needed. We then tested the relative influence of seasonal climate variables on year-to-year variability in the biomass increment using generalized least squares regression, while accounting for temporal autocorrelation. Our results indicate that stand-level growth responded most strongly to previous summer and current spring climate variables, resulting from a combination of individualistic climate responses occurring at the species- and canopy-position level. Our climate models were better fit to stand-level biomass increment than to species-level or canopy-position summaries. The relative growth responses (i.e., percent change) predicted from the most influential climate variables indicate stand-level growth varies less from year to year than species-level or canopy-position growth responses. By assessing stand-level growth response to climate, we provide an alternative perspective on climate-growth relationships of forests, improving our understanding of forest growth dynamics under a fluctuating climate. © 2018 John Wiley & Sons Ltd.
Prediction of road accidents: A Bayesian hierarchical approach.
Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H
2013-03-01
In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any road network provided that the required data are available. Copyright © 2012 Elsevier Ltd. All rights reserved.
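A hedged sketch of step (1) above, gamma-updating of an accident occurrence rate: a Gamma prior on the rate of injury accidents per segment-year is updated with observed counts, using the conjugacy of the Gamma prior with the Poisson likelihood. The prior values and counts are illustrative, not the Austrian network data.

```python
import numpy as np

# prior: rate ~ Gamma(alpha0, beta0), e.g. from network-wide experience
alpha0, beta0 = 2.0, 4.0            # prior mean = alpha0 / beta0 = 0.5 accidents/year

observed_accidents = np.array([1, 0, 2, 1, 0])    # counts on one segment over 5 years
years = len(observed_accidents)

# conjugate update: posterior is Gamma(alpha0 + sum of counts, beta0 + exposure)
alpha_post = alpha0 + observed_accidents.sum()
beta_post = beta0 + years

rng = np.random.default_rng(6)
posterior_draws = rng.gamma(alpha_post, 1.0 / beta_post, 100_000)
print("posterior mean rate:", alpha_post / beta_post)
print("95% credible interval:", np.round(np.percentile(posterior_draws, [2.5, 97.5]), 3))
```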
Epidemics in networks: a master equation approach
NASA Astrophysics Data System (ADS)
Cotacallapa, M.; Hase, M. O.
2016-02-01
A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.
Meta-analysis of correlates of provider behavior in medical encounters.
Hall, J A; Roter, D L; Katz, N R
1988-07-01
This article summarizes the results of 41 independent studies containing correlates of objectively measured provider behaviors in medical encounters. Provider behaviors were grouped a priori into the process categories of information giving, questions, competence, partnership building, and socioemotional behavior. Total amount of communication was also included. All correlations between variables within these categories and external variables (patient outcome variables or patient and provider background variables) were extracted. The most frequently occurring outcome variables were satisfaction, recall, and compliance, and the most frequently occurring background variables were the patient's gender, age, and social class. Average correlations and combined significance levels were calculated for each combination of process category and external variable. Results showed significant relations of small to moderate average magnitude between these external variables and almost all of the provider behavior categories. A theory of provider-patient reciprocation is proposed to account for the pattern of results.
Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane
2017-01-01
Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance with the larger of the two p-values (the p max test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative, with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the p max test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based p max test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
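A minimal sketch of the p max idea above: a fourth-corner-style statistic linking a species trait and a site environment variable through the abundance table is tested twice, once permuting sites (rows) and once permuting species (columns), and the larger of the two p-values is reported. The data are random placeholders and the statistic is a simplified weighted cross-product, not the exact published implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_species = 30, 20
L = rng.poisson(2.0, (n_sites, n_species))   # abundance table (sites x species)
env = rng.standard_normal(n_sites)           # site environment variable
trait = rng.standard_normal(n_species)       # species trait

def fourth_corner(L, env, trait):
    """Abundance-weighted cross-product of standardized env and trait."""
    w = L / L.sum()
    e = (env - env.mean()) / env.std()
    t = (trait - trait.mean()) / trait.std()
    return float(e @ w @ t)

obs = fourth_corner(L, env, trait)
n_perm = 999

def perm_p(permute_rows):
    count = 1
    for _ in range(n_perm):
        Lp = rng.permutation(L, axis=0) if permute_rows else rng.permutation(L, axis=1)
        if abs(fourth_corner(Lp, env, trait)) >= abs(obs):
            count += 1
    return count / (n_perm + 1)

p_sites, p_species = perm_p(True), perm_p(False)
print("site-based p =", p_sites, " species-based p =", p_species)
print("p_max =", max(p_sites, p_species))
```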
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
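A toy sketch of the measurement model only (not the paper's filter design): each sensor output is received or missing according to an independent Bernoulli random variable, as in the successive-missing-measurements description above.

    import numpy as np

    rng = np.random.default_rng(0)
    T_horizon, gamma_bar = 50, 0.8                        # finite horizon and arrival probability
    x = np.zeros(T_horizon)
    y = np.zeros(T_horizon)
    for k in range(1, T_horizon):
        x[k] = 0.95 * x[k - 1] + rng.normal(scale=0.1)    # scalar time-varying state
        gamma_k = rng.binomial(1, gamma_bar)              # Bernoulli arrival indicator
        y[k] = gamma_k * x[k] + rng.normal(scale=0.05)    # measurement missing when gamma_k = 0
    print(y[:10])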
Meurer, Michael; Eichenfield, Lawrence F; Ho, Vincent; Potter, Paul C; Werfel, Thomas; Hultsch, Thomas
2010-05-01
Pimecrolimus and topical corticosteroids (TCS) combination therapy may provide an alternative treatment for patients with severe atopic dermatitis (AD), with faster clearance of disease flares, consequently reducing the duration of TCS treatment. To assess the safety profile of pimecrolimus cream 1% combined with fluticasone versus fluticasone alone in paediatric patients with severe AD. Patients (n = 376) were randomized to a combination of pimecrolimus cream 1% with fluticasone or vehicle plus fluticasone for 4 weeks. The primary outcome measure was the frequency of clinically relevant pre-defined adverse events (AEs) associated with the topical use of corticosteroids in patients with severe AD. Erythematous rash was the only AE occurring more frequently in the combination group, while there were no noticeable differences in the frequency of other AEs of clinical interest between treatment groups. Efficacy variables were comparable between the two groups. A trend for greater time to relapse was observed for the combination of pimecrolimus cream 1% with fluticasone in patients who were clear at the end of treatment, with a marked improvement in facial AD. In paediatric patients with severe AD the overall safety profile of pimecrolimus cream 1% combined with fluticasone was similar to that of fluticasone alone.
Epidemiological characteristics of cases of death from tuberculosis and vulnerable territories
Yamamura, Mellina; Santos-Neto, Marcelino; dos Santos, Rebeca Augusto Neman; Garcia, Maria Concebida da Cunha; Nogueira, Jordana de Almeida; Arcêncio, Ricardo Alexandre
2015-01-01
Objective: to characterize the differences in the clinical and epidemiological profile of cases of death that had tuberculosis as an immediate or associated cause, and to analyze the spatial distribution of the cases of death from tuberculosis within the territories of Ribeirão Preto, Brazil. Method: an ecological study, in which the population consisted of 114 cases of death from tuberculosis. Bivariate analysis was carried out, as well as point density analysis, defined with the Kernel estimate. Results: of the cases of death from tuberculosis, 50 were the immediate cause and 64 an associated cause. Age (p=.008) and sector responsible for the death certificate (p=.003) were the variables that presented statistically significant associations with the cause of death. The spatial distribution, in both events, did not occur randomly, forming clusters in areas of the municipality. Conclusion: the difference in the profiles of the cases of death from tuberculosis, as a basic cause and as an associated cause, was governed by the age and the sector responsible for the completion of the death certificate. The non-randomness of the spatial distribution of the cases suggests areas that are vulnerable to these events. Knowing these areas can contribute to the choice of disease control strategies. PMID:26487142
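A brief sketch of the point-density step described in the Method: a Kernel density estimate over the coordinates of the death events. The coordinates below are simulated placeholders rather than the Ribeirão Preto data.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    coords = rng.normal(loc=[0.0, 0.0], scale=[2.0, 1.0], size=(114, 2))   # x, y of 114 cases

    kde = gaussian_kde(coords.T)                   # Kernel estimate of point density
    xs, ys = np.mgrid[-5:5:100j, -3:3:100j]
    density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
    print(float(density.max()))                    # peak density marks a potential cluster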
NASA Astrophysics Data System (ADS)
Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong
2018-04-01
A new transient mesoscopic model with a randomly packed powder bed has been proposed to investigate the heat and mass transfer and the laser processing quality between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy by the finite volume method (FVM), considering the solid/liquid phase transition, variable temperature-dependent properties and interfacial forces. The results revealed that both the operating temperature and the resultant cooling rate were elevated by increasing the laser power. Accordingly, the viscosity of the liquid was significantly reduced at a large laser power and the melt was characterized by a large velocity, which was prone to produce a more intensive convection within the pool. In this case, sufficient heat and mass transfer occurred at the interface between the previously fabricated tracks and the track currently being built, indicating strong spreading between the neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components produced with a relatively low laser power was notably degraded due to the limited and insufficient heat and mass transfer at the interface of neighboring tracks. Furthermore, the experimental surface morphologies of the top surface were acquired and were in full accordance with the calculated results of the simulation.
Nakaoka, Yuki; Kimura, Akatsuki; Tani, Tomomi; Goshima, Gohta
2015-01-01
The mechanism underlying microtubule (MT) generation in plants has been primarily studied using the cortical MT array, in which fixed-angled branching nucleation and katanin-dependent MT severing predominate. However, little is known about MT generation in the endoplasm. Here, we explored the mechanism of endoplasmic MT generation in protonemal cells of Physcomitrella patens. We developed an assay that utilizes flow cell and oblique illumination fluorescence microscopy, which allowed visualization and quantification of individual MT dynamics. MT severing was infrequently observed, and disruption of katanin did not severely affect MT generation. Branching nucleation was observed, but it showed markedly variable branch angles and was occasionally accompanied by the transport of nucleated MTs. Cytoplasmic nucleation at seemingly random locations was most frequently observed and predominated when depolymerized MTs were regrown. The MT nucleator γ-tubulin was detected at the majority of the nucleation sites, at which a single MT was generated in random directions. When γ-tubulin was knocked down, MT generation was significantly delayed in the regrowth assay. However, nucleation occurred at a normal frequency in steady state, suggesting the presence of a γ-tubulin-independent backup mechanism. Thus, endoplasmic MTs in this cell type are generated in a less ordered manner, showing a broader spectrum of nucleation mechanisms in plants. PMID:25616870
2011-01-01
Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963
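The analysis above was run in SAS PROC MIXED; as a rough, hedged analogue in Python, a split-plot layout can be approximated with a mixed model in which whole plots enter as a random effect. The file and column names below are assumptions for illustration, not the study's data.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("burst_pressure.csv")   # hypothetical ex vivo data set
    model = smf.mixedlm("burst_pressure ~ C(temperature) * C(solution) * C(artery)",
                        data=df,
                        groups=df["whole_plot"])   # random intercept for each whole plot
    print(model.fit().summary())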
Benchmarking dairy herd health status using routinely recorded herd summary data.
Parker Gaddis, K L; Cole, J B; Clay, J S; Maltecca, C
2016-02-01
Genetic improvement of dairy cattle health through the use of producer-recorded data has been determined to be feasible. Low estimated heritabilities indicate that genetic progress will be slow. Variation observed in lowly heritable traits can largely be attributed to nongenetic factors, such as the environment. More rapid improvement of dairy cattle health may be attainable if herd health programs incorporate environmental and managerial aspects. More than 1,100 herd characteristics are regularly recorded on farm test-days. We combined these data with producer-recorded health event data, and parametric and nonparametric models were used to benchmark herd and cow health status. Health events were grouped into 3 categories for analyses: mastitis, reproductive, and metabolic. Both herd incidence and individual incidence were used as dependent variables. Models implemented included stepwise logistic regression, support vector machines, and random forests. At both the herd and individual levels, random forest models attained the highest accuracy for predicting health status in all health event categories when evaluated with 10-fold cross-validation. Accuracy (SD) ranged from 0.61 (0.04) to 0.63 (0.04) when using random forest models at the herd level. Accuracy of prediction (SD) at the individual cow level ranged from 0.87 (0.06) to 0.93 (0.001) with random forest models. Highly significant variables and key words from logistic regression and random forest models were also investigated. All models identified several of the same key factors for each health event category, including movement out of the herd, size of the herd, and weather-related variables. We concluded that benchmarking health status using routinely collected herd data is feasible. Nonparametric models were better suited to handle this complex data with numerous variables. These data mining techniques were able to perform prediction of health status and could add evidence to personal experience in herd management. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
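A minimal sketch of the benchmarking step, assuming hypothetical herd summary data: a random forest classifier for one health-event category evaluated with 10-fold cross-validation, mirroring the workflow described above. File and column names are assumptions.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("herd_testday.csv")                 # hypothetical herd summary table
    y = df["high_mastitis_incidence"]                    # assumed binary herd-status label
    X = df.drop(columns=["high_mastitis_incidence"])

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(rf, X, y, cv=10, scoring="accuracy")
    print(scores.mean(), scores.std())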
A tool for estimating impact of construction quality on life cycle performance of pavements.
DOT National Transportation Integrated Search
2005-11-01
Almost every constructed road develops distresses randomly in different subsections of the pavement. One reason for the random development of distress is the variability in construction quality. As such the goal in this project is to devise a tool th...
Use of adjoint methods in the probabilistic finite element approach to fracture mechanics
NASA Technical Reports Server (NTRS)
Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted
1988-01-01
The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.
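A small numpy illustration of why the adjoint approach is cheap when few objective functions are involved: for g = c^T u with K u = f, a single adjoint solve K^T lambda = c yields dg/dtheta_i = -lambda^T (dK/dtheta_i) u for every random variable theta_i. This is a generic sketch, not the PFEM code.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    K = rng.normal(size=(n, n)); K = K @ K.T + n * np.eye(n)   # SPD stiffness matrix
    dK = [rng.normal(size=(n, n)) for _ in range(3)]           # dK/dtheta_i for three random variables
    f = rng.normal(size=n)
    c = rng.normal(size=n)                                     # objective g = c^T u

    u = np.linalg.solve(K, f)          # one primal solve
    lam = np.linalg.solve(K.T, c)      # one adjoint solve, shared by all derivatives

    grads = [-lam @ dKi @ u for dKi in dK]   # dg/dtheta_i, assuming f independent of theta
    print(grads)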
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
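A hedged one-dimensional sketch of the compressive-sensing ingredient only (the iterative rotations are not shown): a sparse vector of probabilists' Hermite coefficients is recovered from a small number of random samples with an l1 solver. The coefficients are placeholders.

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    deg, n_samples = 15, 40
    coef_true = np.zeros(deg + 1)
    coef_true[[1, 4, 7]] = [1.0, 0.5, -0.8]            # sparse "true" expansion

    xi = rng.normal(size=n_samples)                    # standard Gaussian random variable
    Phi = hermevander(xi, deg)                         # probabilists' Hermite measurement matrix
    y = Phi @ coef_true + 0.01 * rng.normal(size=n_samples)

    coef_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(Phi, y).coef_
    print(np.round(coef_hat, 2))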
Quantifying and mapping spatial variability in simulated forest plots
Gavin R. Corral; Harold E. Burkhart
2016-01-01
We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map spatial variability of forest stands. Simulated stands were developed as regularly spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...
Dynamic Quantum Allocation and Swap-Time Variability in Time-Sharing Operating Systems.
ERIC Educational Resources Information Center
Bhat, U. Narayan; Nance, Richard E.
The effects of dynamic quantum allocation and swap-time variability on central processing unit (CPU) behavior are investigated using a model that allows both quantum length and swap-time to be state-dependent random variables. Effective CPU utilization is defined to be the proportion of a CPU busy period that is devoted to program processing, i.e.…
ERIC Educational Resources Information Center
Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S.
2014-01-01
We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.…
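A bare-bones two-stage least squares sketch of the IV logic described above, with simulated placeholder data: random assignment serves as the instrument for the mediator, whose effect on the outcome is then estimated.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    z = rng.binomial(1, 0.5, n)                    # randomized assignment (instrument)
    u = rng.normal(size=n)                         # unobserved confounder
    m = 0.5 * z + 0.8 * u + rng.normal(size=n)     # mediator
    y = 1.2 * m + 0.8 * u + rng.normal(size=n)     # outcome; true mediator effect is 1.2

    Z = np.column_stack([np.ones(n), z])
    m_hat = Z @ np.linalg.lstsq(Z, m, rcond=None)[0]        # stage 1: fitted mediator
    X = np.column_stack([np.ones(n), m_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]             # stage 2: IV estimate
    print(beta[1])                                          # near 1.2, unlike naive OLS of y on m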
Applied statistics in agricultural, biological, and environmental sciences.
USDA-ARS?s Scientific Manuscript database
Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
NASA Astrophysics Data System (ADS)
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, where n is the number of vertices of the random regular graph.
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate the protocol of exponential gain of randomness certified by quantum contextuality in a trapped-ion system. The genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-type inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.
Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia
2016-01-01
Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small (<585 km2) watersheds to determine which variables are fundamentally important to the estimation of annual loads of total nitrogen. Initial analysis led to the splitting of watersheds into three groups based on predominant land use (agricultural, developed, and undeveloped). Nitrogen application, agricultural and developed land area, and impervious or developed land in the 100-m stream buffer were commonly extracted variables by both recursive partitioning and random forest regression. A series of multiple linear regression equations utilizing the extracted variables were created and applied to the watersheds. As few as three variables explained as much as 76 % of the variability in total nitrogen loads for watersheds with predominantly agricultural land use. Catchment-scale national maps were generated to visualize the total nitrogen loads and yields across the USA. The estimates provided by these models can inform water managers and help identify areas where more in-depth monitoring may be beneficial.
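A hedged sketch of the screening-then-regression workflow described above: a random forest ranks the watershed variables, and a small multiple linear regression is then built from the top-ranked ones. File and column names are assumptions, not the study's data.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("watersheds.csv")                  # hypothetical 636-watershed table
    y = df["total_nitrogen_load"]
    X = df.drop(columns=["total_nitrogen_load"])

    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    top3 = X.columns[np.argsort(rf.feature_importances_)[::-1][:3]]   # most important variables

    lm = LinearRegression().fit(X[top3], y)
    print(list(top3), lm.score(X[top3], y))             # R^2 from as few as three variables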
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429
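A toy sketch contrasting the two scan orders for a highly correlated bivariate Gaussian; it illustrates the sampling mechanics only, not the mixing-time bounds proved in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    rho = 0.9

    def gibbs(scan, n_updates=10000):
        x, samples = np.zeros(2), []
        for t in range(n_updates):
            i = t % 2 if scan == "systematic" else rng.integers(2)   # coordinate to update
            j = 1 - i
            x[i] = rho * x[j] + np.sqrt(1 - rho**2) * rng.normal()   # sample x_i given x_j
            samples.append(x.copy())
        return np.array(samples)

    print(gibbs("systematic").std(axis=0), gibbs("random").std(axis=0))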
Blood pressure variability of two ambulatory blood pressure monitors.
Kallem, Radhakrishna R; Meyers, Kevin E C; Cucchiara, Andrew J; Sawinski, Deirdre L; Townsend, Raymond R
2014-04-01
There are no data on the evaluation of blood pressure (BP) variability comparing two ambulatory blood pressure monitoring monitors worn at the same time. Hence, this study was carried out to compare variability of BP in healthy untreated adults using two ambulatory BP monitors worn at the same time over an 8-h period. An Accutorr device was used to measure office BP in the dominant and nondominant arms of 24 participants. Simultaneous 8-h BP and heart rate data were measured in 24 untreated adult volunteers by Mobil-O-Graph (worn for an additional 16 h after removing the Spacelabs monitor) and Spacelabs with both random (N=12) and nonrandom (N=12) assignment of each device to the dominant arm. Average real variability (ARV), SD, coefficient of variation, and variation independent of mean were calculated for systolic blood pressure, diastolic blood pressure, mean arterial pressure, and pulse pressure (PP). Whether the Mobil-O-Graph was applied to the dominant or the nondominant arm, the ARV of mean systolic BP (P=0.003 nonrandomized; P=0.010 randomized) and PP (P=0.009 nonrandomized; P=0.005 randomized) remained significantly higher than with the Spacelabs device, whereas the ARV of the mean arterial pressure was not significantly different. The average BP readings and ARVs for systolic blood pressure and PP obtained by the Mobil-O-Graph were considerably higher for the daytime than the night-time. Given the emerging interest in the effect of BP variability on health outcomes, the accuracy of its measurement is important. Our study raises concerns about the accuracy of pooling international ambulatory blood pressure monitoring variability data using different devices.
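For reference, the variability indices named above can be computed as in the short sketch below; the readings are invented placeholders, not study data.

    import numpy as np

    sbp = np.array([118., 124., 121., 130., 127., 122., 126., 119.])   # placeholder systolic readings

    arv = np.mean(np.abs(np.diff(sbp)))    # average real variability
    sd = sbp.std(ddof=1)                   # standard deviation
    cv = 100 * sd / sbp.mean()             # coefficient of variation (%)
    print(arv, sd, cv)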
Directionality theory and the evolution of body size.
Demetrius, L
2000-12-07
Directionality theory, a dynamic theory of evolution that integrates population genetics with demography, is based on the concept of evolutionary entropy, a measure of the variability in the age of reproducing individuals in a population. The main tenets of the theory are three principles relating the response to the ecological constraints a population experiences, with trends in entropy as the population evolves under mutation and natural selection. (i) Stationary size or fluctuations around a stationary size (bounded growth): a unidirectional increase in entropy; (ii) prolonged episodes of exponential growth (unbounded growth), large population size: a unidirectional decrease in entropy; and (iii) prolonged episodes of exponential growth (unbounded growth), small population size: random, non-directional change in entropy. We invoke these principles, together with an allometric relationship between entropy, and the morphometric variable body size, to provide evolutionary explanations of three empirical patterns pertaining to trends in body size, namely (i) Cope's rule, the tendency towards size increase within phyletic lineages; (ii) the island rule, which pertains to changes in body size that occur as species migrate from mainland populations to colonize island habitats; and (iii) Bergmann's rule, the tendency towards size increase with increasing latitude. The observation that these ecotypic patterns can be explained in terms of the directionality principles for entropy underscores the significance of evolutionary entropy as a unifying concept in forging a link between micro-evolution, the dynamics of gene frequency change, and macro-evolution, dynamic changes in morphometric variables.
Gout, Lilian; Eckert, Maria; Rouxel, Thierry; Balesdent, Marie-Hélène
2006-01-01
Leptosphaeria maculans is the most ubiquitous fungal pathogen of Brassica crops and causes the devastating stem canker disease of oilseed rape worldwide. We used minisatellite markers to determine the genetic structure of L. maculans in four field populations from France. Isolates were collected at three different spatial scales (leaf, 2-m2 field plot, and field) enabling the evaluation of spatial distribution of the mating type alleles and of genetic variability within and among field populations. Within each field population, no gametic disequilibrium between the minisatellite loci was detected and the mating type alleles were present at equal frequencies. Both sexual and asexual reproduction occur in the field, but the genetic structure of these populations is consistent with annual cycles of randomly mating sexual reproduction. All L. maculans field populations had a high level of gene diversity (H = 0.68 to 0.75) and genotypic diversity. Within each field population, the number of genotypes often was very close to the number of isolates. Analysis of molecular variance indicated that >99.5% of the total genetic variability was distributed at a small spatial scale, i.e., within 2-m2 field plots. Population differentiation among the four field populations was low (GST < 0.02), suggesting a high degree of gene exchange between these populations. The high gene flow evidenced here in French populations of L. maculans suggests a rapid countrywide diffusion of novel virulence alleles whenever novel resistance sources are used. PMID:16391041
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
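A hedged sketch of the second step described above: fitting an analytical (here exponential) model to an experimental variogram by least squares. The experimental values are placeholders, not the Brazilian data sets.

    import numpy as np
    from scipy.optimize import curve_fit

    lags = np.array([0.5, 1, 2, 4, 8, 16, 32])                          # lag distances (km)
    gamma_exp = np.array([0.10, 0.18, 0.30, 0.45, 0.58, 0.63, 0.65])    # experimental semivariance

    def exp_model(h, nugget, psill, rng_):          # exponential variogram model
        return nugget + psill * (1.0 - np.exp(-3.0 * h / rng_))

    popt, _ = curve_fit(exp_model, lags, gamma_exp, p0=[0.05, 0.6, 10.0])
    print(dict(zip(["nugget", "partial_sill", "range"], popt)))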
Wijgerde, Tim; Dini-Andreote, Francisco; Eriksson, Britas Klemens; Salles, Joana Falcão; Pen, Ido; Wild, Christian
2017-01-01
Recent research suggests that prior exposure of several months to elevated irradiance induces enhanced thermal tolerance in scleractinian corals. While this tolerance has been reported at the species level, individual coral colonies may react differently due to individual variability in thermal tolerance. As thermal anomalies are predicted to become common in the upcoming future, intraspecific variation may be key to the survival of coral populations. In order to study light-history based thermal stress responses on individual colonies, we developed a preliminary microcosm experiment where three randomly chosen, aquacultured colonies of the model coral Stylophora pistillata were exposed to two irradiance treatments (200 and 400 μmol photons m−2 s−1) for 31 days, followed by artificially induced heat stress (∼33.4 °C). We found different responses to occur at both the intraspecific and the intracolonial levels, as indicated by either equal, less severe, delayed, and/or even non-necrotic responses of corals previously exposed to the irradiance of 400 compared to 200 μmol photons m−2 s−1. In addition, all individual colonies revealed light-enhanced calcification. Finally, elevated irradiance resulted in a lower chlorophyll a concentration in one colony compared to the control treatment, and the same colony displayed more rapid bleaching compared to the other ones. Taken together, this study highlights the potential importance of intra-individual variability in physiological responses of scleractinian corals and provides recommendations for improving methodological designs for future studies. PMID:29038747
Continuous-variable phase estimation with unitary and random linear disturbance
NASA Astrophysics Data System (ADS)
Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.
2014-10-01
We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, n_out. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.
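For context, the ultimate precision limit invoked above is the quantum Cramér-Rao bound; in its standard textbook form (not specific to this paper), for M repetitions and quantum Fisher information F_Q it reads

    \operatorname{Var}(\hat{\phi}) \;\ge\; \frac{1}{M\, F_Q[\varrho_\phi]}

Quadratic scaling of F_Q in n_out thus corresponds to the Heisenberg-like regime preserved under unitary disturbance, while the linear scaling reported for random disturbance recovers only shot-noise-like precision.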