Preference uncertainty, preference learning, and paired comparison experiments
David C. Kingsley; Thomas C. Brown
2010-01-01
Results from paired comparison experiments suggest that as respondents progress through a sequence of binary choices they become more consistent, apparently fine-tuning their preferences. Consistency may be indicated by the variance of the estimated valuation distribution measured by the error term in the random utility model. A significant reduction in the variance is...
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering-order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism to a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering-order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction
NASA Astrophysics Data System (ADS)
Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng
Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely reduction temperature, reduction time, and proportion of additive, on the recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at a 92.5% confidence level.
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
Word Durations in Non-Native English
Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.
2010-01-01
In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the 'accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
Variance Reduction in Simulation Experiments: A Mathematical-Statistical Framework.
1983-12-01
Handscomb (1964), Granovsky (1981), Rubinstein (1981), and Wilson (1983b). The use of conditional expectations (CE) will be described as the term is...
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
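For readers unfamiliar with the exponential transform named above, the following minimal sketch (illustrative only; it shows the distance-stretching part of the transform for a purely absorbing 1-D slab, not Dwivedi's combined angular-biasing scheme or the weight-dependent theory of the abstract) samples flight distances from a reduced cross section and carries the likelihood ratio as a particle weight:

```python
import numpy as np

def slab_leakage(n, sigma_t=1.0, thickness=10.0, p=0.7, rng=None):
    """Estimate the uncollided leakage exp(-sigma_t * thickness) through a
    purely absorbing slab using the exponential transform: sample free paths
    from a stretched cross section sigma_star = sigma_t * (1 - p) and carry
    the likelihood ratio of the true and biased path densities as a weight."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma_star = sigma_t * (1.0 - p)        # reduced (forward-biased) cross section
    tally = 0.0
    for _ in range(n):
        s = rng.exponential(1.0 / sigma_star)     # biased flight distance
        if s >= thickness:                        # particle escapes the slab
            # weight = true survival probability / biased survival probability
            tally += np.exp(-sigma_t * thickness) / np.exp(-sigma_star * thickness)
    return tally / n

print(slab_leakage(100_000), np.exp(-10.0))   # biased-MC estimate vs exact answer
```

Because every escaping particle carries the same weight in this pure-absorber case, the estimator concentrates samples on the rare escape event and converges far faster than analog sampling would.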
Importance Sampling Variance Reduction in GRESS ATMOSIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakeford, Daniel Tyler
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
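As a generic illustration of the technique (a sketch; no GRESS ATMOSIM or Geant4 code is implied), importance sampling draws from a biased density concentrated on the rare region of interest and corrects each sample with the weight w = p/q:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Rare-event integral: P(X > 4) for X ~ N(0, 1), true value ~3.17e-5.
x = rng.normal(size=n)
analog = np.mean(x > 4.0)                    # almost always 0 at this sample size

# Importance sampling: draw from N(4, 1), weight by the density ratio p/q.
y = rng.normal(loc=4.0, size=n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 4.0) ** 2)  # normalizers cancel
is_est = np.mean((y > 4.0) * w)

print(analog, is_est)   # the weighted estimate recovers the tail probability
```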
Ex Post Facto Monte Carlo Variance Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, Thomas E.
The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
Dynamic Repertoire of Intrinsic Brain States Is Reduced in Propofol-Induced Unconsciousness
Liu, Xiping; Pillay, Siveshigan
2015-01-01
Abstract The richness of conscious experience is thought to scale with the size of the repertoire of causal brain states, and it may be diminished in anesthesia. We estimated the state repertoire from dynamic analysis of intrinsic functional brain networks in conscious sedated and unconscious anesthetized rats. Functional resonance images were obtained from 30-min whole-brain resting-state blood oxygen level-dependent (BOLD) signals at propofol infusion rates of 20 and 40 mg/kg/h, intravenously. Dynamic brain networks were defined at the voxel level by sliding window analysis of regional homogeneity (ReHo) or coincident threshold crossings (CTC) of the BOLD signal acquired in nine sagittal slices. The state repertoire was characterized by the temporal variance of the number of voxels with significant ReHo or positive CTC. From low to high propofol dose, the temporal variances of ReHo and CTC were reduced by 78%±20% and 76%±20%, respectively. Both baseline and propofol-induced reduction of CTC temporal variance increased from lateral to medial position. Group analysis showed a 20% reduction in the number of unique states at the higher propofol dose. Analysis of temporal variance in 12 anatomically defined regions of interest predicted that the largest changes occurred in visual cortex, parietal cortex, and caudate-putamen. The results suggest that the repertoire of large-scale brain states derived from the spatiotemporal dynamics of intrinsic networks is substantially reduced at an anesthetic dose associated with loss of consciousness. PMID:24702200
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
Reduction of variance in spectral estimates for correction of ultrasonic aberration.
Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C
2006-01-01
A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost effective combination when it is used with the lower cost/higher variance representative of the existing procedures.
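The two-phase (double sampling) regression estimator underlying such a study can be sketched as follows; the synthetic numbers and variable names are illustrative, not the LACIE data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Phase 1: cheap LANDSAT-derived proxy acreage x on a large sample of segments.
N1 = 2000
x1 = rng.normal(50.0, 10.0, N1)

# Phase 2: expensive "ground truth" acreage y on a small subsample.
n2 = 100
idx = rng.choice(N1, n2, replace=False)
x2 = x1[idx]
y2 = 0.9 * x2 + rng.normal(0.0, 3.0, n2)         # truth, correlated with proxy

# Double-sampling regression estimator of the mean acreage:
b = np.cov(x2, y2)[0, 1] / np.var(x2, ddof=1)    # slope fitted on the subsample
y_reg = y2.mean() + b * (x1.mean() - x2.mean())  # adjust toward the phase-1 mean

# Its variance is roughly s_e^2/n2 + b^2 s_x^2/N1, well below s_y^2/n2
# whenever the proxy is strongly correlated with the truth.
print(y_reg)
```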
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
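The weight-window game itself is simple to state. The sketch below is illustrative (not MCNP source): given a window [w_low, w_up] and a survival weight w_survive for a cell, particles above the window are split and particles below it play Russian roulette, preserving expected total weight in both branches:

```python
import numpy as np

def apply_weight_window(w, w_low, w_up, w_survive, rng):
    """Return the list of post-window particle weights for one particle of
    weight w entering a cell with window [w_low, w_up].

    - Above the window: split into n copies of weight w/n.
    - Below the window: Russian roulette, surviving with probability
      w / w_survive at weight w_survive.  Both games are unbiased because
      the expected total weight is preserved."""
    if w > w_up:
        n = int(np.ceil(w / w_up))
        return [w / n] * n
    if w < w_low:
        if rng.random() < w / w_survive:
            return [w_survive]
        return []                     # rouletted (killed)
    return [w]                        # inside the window: do nothing

rng = np.random.default_rng(3)
print(apply_weight_window(5.0, 0.5, 2.0, 1.0, rng))   # -> three copies of 5/3
```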
Direct simulation of compressible turbulence in a shear flow
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
The purpose of this study is to investigate compressibility effects on the turbulence in homogeneous shear flow. It is found that the growth of the turbulent kinetic energy decreases with increasing Mach number, a phenomenon similar to the reduction of turbulent velocity intensities observed in experiments on supersonic free shear layers. An examination of the turbulent energy budget shows that both the compressible dissipation and the pressure-dilatation contribute to the decrease in the growth of kinetic energy. The pressure-dilatation is predominantly negative in homogeneous shear flow, in contrast to its predominantly positive behavior in isotropic turbulence. The different signs of the pressure-dilatation are explained by theoretical consideration of the equations for the pressure variance and density variance.
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed as Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performances of the Poisson noise reduction are demonstrated in some experiments of the image reconstruction of time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
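A minimal sketch of the idea (not the authors' reconstruction code; the smoothness weight beta and the 1-D test signal are illustrative) is to maximize the Poisson log-likelihood plus a quadratic neighbour-difference (Markov random field) prior by projected gradient ascent:

```python
import numpy as np

def map_poisson_denoise(y, beta=0.5, iters=500, lr=0.1):
    """MAP estimate of a noise-free 1-D signal lam from Poisson counts y
    under a quadratic smoothness prior on neighbouring samples:
        maximize  sum(y*log(lam) - lam) - beta * sum(diff(lam)**2)
    solved here by simple projected gradient ascent (a sketch; the paper
    uses its own MRF formulation and optimizer)."""
    lam = np.maximum(y.astype(float), 1e-3)          # start at the data
    for _ in range(iters):
        grad = y / lam - 1.0                         # Poisson likelihood term
        grad[:-1] -= 2.0 * beta * (lam[:-1] - lam[1:])   # prior term, right edge
        grad[1:]  -= 2.0 * beta * (lam[1:] - lam[:-1])   # prior term, left edge
        lam = np.maximum(lam + lr * grad, 1e-3)      # project back to lam > 0
    return lam

rng = np.random.default_rng(4)
truth = 20.0 + 10.0 * np.sin(np.linspace(0, 3 * np.pi, 200))
counts = rng.poisson(truth)
print(np.var(counts - truth), np.var(map_poisson_denoise(counts) - truth))
```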
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
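For context, the standard variance-based (Sobol) first-order index that DSA generalizes can be estimated with a pick-freeze scheme like the sketch below; this is the classical estimator, not the DSA index function itself, and the toy model is illustrative:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, rng=None):
    """Pick-freeze estimator of first-order Sobol indices S_i for a model
    f(X) with d independent U(0,1) inputs: S_i = Var(E[Y|X_i]) / Var(Y).
    DSA replaces the implicit 'reduce the input variance to zero'
    assumption with a partial-reduction index function."""
    if rng is None:
        rng = np.random.default_rng(5)
    a = rng.random((n, d))
    b = rng.random((n, d))
    ya = f(a)
    var_y = ya.var()
    s = np.empty(d)
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]                # freeze input i from sample A
        s[i] = np.mean(ya * (f(ab) - f(b))) / var_y
    return s

def f(x):   # toy model: input 0 dominates, input 2 acts mostly via interaction
    return (np.sin(2 * np.pi * x[:, 0])
            + 0.5 * np.sin(2 * np.pi * x[:, 1])
            + 0.1 * (x[:, 2] - 0.5) * np.sin(2 * np.pi * x[:, 0]))

print(sobol_first_order(f, 3))
```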
Adaptive noise Wiener filter for scanning electron microscope imaging system.
Sim, K S; Teh, V; Nia, M E
2016-01-01
Noise on scanning electron microscope (SEM) images is studied. Gaussian noise is the most common type of noise in SEM images. We developed a new noise reduction filter based on the Wiener filter. We compared the performance of this new filter, the adaptive noise Wiener (ANW) filter, with four common existing filters: the average filter, the median filter, the Gaussian smoothing filter, and the standard Wiener filter. Based on the experimental results, the proposed filter performs better across different noise variances than the other existing noise removal filters. © Wiley Periodicals, Inc.
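The local-statistics scheme that such adaptive Wiener variants build on can be sketched as follows (illustrative; this is the classic locally adaptive Wiener filter, also behind scipy.signal.wiener, not the authors' ANW filter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, win=5, noise_var=None):
    """Locally adaptive Wiener filter: estimate a local mean and variance
    in a win x win window, then shrink each pixel toward the local mean in
    proportion to the estimated local signal-to-noise ratio.  Flat regions
    are smoothed strongly; edges are mostly preserved."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    sqr_mean = uniform_filter(img**2, win)
    var = np.maximum(sqr_mean - mean**2, 0.0)
    if noise_var is None:
        noise_var = var.mean()                  # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

rng = np.random.default_rng(6)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)   # Gaussian noise, as in SEM
print(np.var(noisy - clean), np.var(adaptive_wiener(noisy) - clean))
```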
Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.
Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand
2017-12-15
The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for the workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized in three levels were selected to describe the exposure relationships separately for the job types production, cleaning, maintenance, foreman, administration, laboratory, and other jobs by linear mixed models. The influence of plant and job determinants on variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models (best) describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased the thoracic aerosol exposure above the mean level for production workers were: packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent with welding and dismantling before repair work increased the exposure, while time with electrical maintenance and oiling decreased the exposure. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold (cleaners) variation in thoracic aerosol levels. Within-worker (WW) variance accounted for a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (from 5% to 39% reduction for production, maintenance, and other jobs, respectively, but a 79% increase for foremen), and a substantial influence on the between-worker within-plant variance (30-96% for production, foremen, and other workers). Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction and 8% increase), and considerable influence on the BP variance (36-58% reduction) compared to the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers. Thus, job task may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reducing the exposure levels. Rotation between low and higher exposed tasks may help equalize the exposure levels between high and low exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M
2017-06-01
To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 T acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analyses of T1 times were performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced the T1 standard deviation in 14 out of 16 myocardial segments (87.5%). Significant reduction of T1 variance could be obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No such artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 times between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport; the primary application is atmospheric remote sensing; the technique first solves the adjoint problem using a deterministic solver; next, the adjoint solution is used as an importance function for the MC solver; the adjoint problem is solved quickly since it ignores the volume. Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observations. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of the phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints from World Ocean Atlas nutrient data showed a further reduction in the model-data misfit and that assimilation with extensive data sets is necessary.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. 
The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
NASA Astrophysics Data System (ADS)
Yin, Shaohua; Lin, Guo; Li, Shiwei; Peng, Jinhui; Zhang, Libo
2016-09-01
Microwave heating has been applied in the field of drying rare earth carbonates to improve drying efficiency and reduce energy consumption. The effects of power density, material thickness, and drying time on the weight reduction (WR) are studied using response surface methodology (RSM). The results show that RSM is feasible for describing the relationship between the independent variables and the weight reduction. Based on the analysis of variance (ANOVA), the model is in accordance with the experimental data. The optimum experimental conditions are a power density of 6 W/g, material thickness of 15 mm, and drying time of 15 min, resulting in an experimental weight reduction of 73%. Comparative experiments show that microwave drying has the advantages of rapid dehydration and energy conservation. Particle analysis shows that the size distribution of rare earth carbonates after microwave drying is more even than that from an oven. Based on these findings, microwave heating technology has important implications for energy saving and improved production efficiency in rare earth smelting enterprises and is a green heating process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
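The multilevel idea can be shown in miniature: take many cheap low-fidelity samples plus a small paired set where both fidelities are evaluated, and telescope the means. The stand-in models below are illustrative, not HDG or reduced basis solvers:

```python
import numpy as np

rng = np.random.default_rng(7)

def hi_fi(x):   # stand-in for an expensive high-fidelity output (illustrative)
    return np.sin(x) + 0.05 * x**2

def lo_fi(x):   # stand-in for a cheap surrogate of the same map (illustrative)
    return np.sin(x)

# Two-level estimator: E[hi] = E[lo] + E[hi - lo].
x_cheap = rng.normal(size=200_000)          # many cheap low-fidelity samples
x_corr = rng.normal(size=2_000)             # few expensive paired samples
ml_est = lo_fi(x_cheap).mean() + (hi_fi(x_corr) - lo_fi(x_corr)).mean()

# Reference: brute-force high-fidelity MC at the same expensive budget.
mc_est = hi_fi(rng.normal(size=2_000)).mean()
print(ml_est, mc_est)   # ml_est has far lower variance per unit of hi-fi work
```

The estimator is unbiased by the telescoping identity, and because hi - lo has small variance when the surrogate is accurate, most of the statistical work is carried by the cheap level.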
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
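The recipe can be sketched as follows (a CADIS-style sketch under simplifying assumptions, not the MCNP5/PARTISN implementation): weight-window centers are set inversely proportional to the deterministic adjoint flux, and the source is biased toward high-adjoint regions with birth weights chosen to keep the game fair:

```python
import numpy as np

def cadis_parameters(adjoint_flux, source):
    """Given mesh-wise adjoint ('importance') fluxes from a deterministic
    solver and the physical source pdf, return a biased source pdf and
    weight-window centers in the spirit of CADIS-type schemes (a sketch;
    assumes adjoint_flux > 0 everywhere).

    - biased source ~ q(r) * phi_adj(r): more starts where they matter;
    - window centers ~ R / phi_adj(r): particles moving toward important
      regions get split, those drifting away get rouletted;
    - birth weights equal the window centers, so source particles are
      born inside their windows (the CADIS consistency property)."""
    response = np.sum(source * adjoint_flux)     # estimated detector response
    q_biased = source * adjoint_flux / response  # normalized biased source pdf
    ww_center = response / adjoint_flux          # low weight = important region
    birth_weight = source / q_biased             # = response / adjoint_flux
    return q_biased, ww_center, birth_weight

phi_adj = np.array([1e-3, 1e-2, 1e-1, 1.0])      # importance grows toward detector
q = np.array([0.7, 0.2, 0.08, 0.02])             # physical source pdf
print(cadis_parameters(phi_adj, q))
```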
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
2017-04-06
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
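A minimal dense-matrix sketch of the deflated Hutchinson estimator for a symmetric positive definite A (the paper treats large sparse lattice operators with singular-vector deflation; this small example is only illustrative) looks like:

```python
import numpy as np

def deflated_trace_inv(A, k=10, n_probes=50, rng=None):
    """Estimate trace(A^{-1}) for symmetric positive definite A by
    (1) computing the k smallest eigenpairs and summing 1/lambda exactly,
    (2) running Hutchinson with Rademacher probes on the deflated
    complement.  Deflation removes the largest 1/lambda contributions
    from the stochastic part, which is where most of the variance lives."""
    if rng is None:
        rng = np.random.default_rng(8)
    n = A.shape[0]
    lam, V = np.linalg.eigh(A)                  # ascending eigenvalues
    Vk = V[:, :k]                               # k smallest eigenvectors
    exact = np.sum(1.0 / lam[:k])               # deflated part, noise-free
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)     # Rademacher probe
        zd = z - Vk @ (Vk.T @ z)                # project out deflated space
        est += zd @ np.linalg.solve(A, zd)
    return exact + est / n_probes

rng = np.random.default_rng(8)
M = rng.normal(size=(200, 200))
A = M @ M.T + 0.1 * np.eye(200)                 # SPD, somewhat ill-conditioned
print(deflated_trace_inv(A), np.trace(np.linalg.inv(A)))
```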
Control of large flexible structures - An experiment on the NASA Mini-Mast facility
NASA Technical Reports Server (NTRS)
Hsieh, Chen; Kim, Jae H.; Liu, Ketao; Zhu, Guoming; Skelton, Robert E.
1991-01-01
The output variance constraint controller design procedure is integrated with model reduction by modal cost analysis. A procedure is given for tuning MIMO controller designs to find the maximal rms performance of the actual system. Controller designs based on a finite-element model of the system are compared with controller designs based on an identified model (obtained using the Q-Markov Cover algorithm). The identified model and the finite-element model led to similar closed-loop performance, when tested in the Mini-Mast facility at NASA Langley.
Infrared Measurement Variability Analysis.
1980-09-01
collecting optics of the measurement system. The first equation for the blackbody experiment has the form E(T) = (Ae/(πD²)) ∫ from 3.5 μm to 4.0 μm of W(λ,T) τ(λ,D) dλ ... potential for noise reduction by identifying and reducing contributing system effects. The measurement variance of an infinite population of possible ... irradiance can be written E(T+ΔT) = (Ae/(πD²)) ∫ from 3.5 μm to 4.0 μm of W(λ,T+ΔT) τ(λ,D) dλ, since τ + Δτ ... Using the two expressions just developed ...
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect, the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, which gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and the optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
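The underlying calculation can be sketched as below (a normal-approximation sketch, not the powerNest implementation; the pilot variance estimates and effect size are illustrative): pilot estimates of the biological and technical variance components are combined into the variance of a group mean, from which a two-group prospective power follows:

```python
import numpy as np
from scipy.stats import norm

def prospective_power(effect, sigma2_bio, sigma2_tech,
                      n_bio, n_tech, alpha=0.05):
    """Two-group power for a qPCR-style nested design: n_bio biological
    replicates per group, n_tech technical replicates per biological
    sample.  Technical variance averages away with n_bio * n_tech, but
    biological variance only with n_bio, so technical replicates alone
    cannot rescue an underpowered design."""
    var_mean = sigma2_bio / n_bio + sigma2_tech / (n_bio * n_tech)
    se_diff = np.sqrt(2.0 * var_mean)            # difference of two group means
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(effect / se_diff - z_crit)

# Illustrative pilot estimates: 1-cycle effect, bio var 0.8, tech var 0.2.
for n_bio, n_tech in [(3, 3), (6, 1), (3, 9)]:
    print(n_bio, n_tech,
          round(prospective_power(1.0, 0.8, 0.2, n_bio, n_tech), 3))
```

The printed comparison illustrates the key design point: six biological replicates with one technical replicate outperform three biological replicates with nine technical ones.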
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be investigated. To perform this type of neutron deep-penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.
Multi-segmental movement patterns reflect juggling complexity and skill level.
Zago, Matteo; Pacifici, Ilaria; Lovecchio, Nicola; Galli, Manuela; Federolf, Peter Andreas; Sforza, Chiarella
2017-08-01
The juggling action of six expert and six intermediate jugglers was recorded with a motion capture system and decomposed into its fundamental components through Principal Component Analysis. The aim was to quantify trends in movement dimensionality, multi-segmental patterns, and rhythmicity as a function of proficiency level and task complexity. Dimensionality was quantified in terms of Residual Variance, while the Relative Amplitude was introduced to account for individual differences in movement components. We observed that experience-related modifications in multi-segmental actions exist, such as the progressive reduction of error-correction movements, especially in the complex task condition. The systematic identification of motor patterns sensitive to the acquisition of specific experience could accelerate the learning process. Copyright © 2017 Elsevier B.V. All rights reserved.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
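The antithetic-variates device used above can be isolated in a few lines (a sketch with a toy monotone model; the UKPDS 68 outcome equations are not reproduced here): each uniform draw u is paired with 1 - u, so overestimates and underestimates cancel within a pair:

```python
import numpy as np

rng = np.random.default_rng(9)

def discounted_qalys(u, horizon=40, rate=0.035):
    """Toy stand-in for a patient-level simulation: uniform draws u drive a
    survival-like curve.  NOT the UKPDS 68 equations, just a monotone
    function of u so that the antithetic pairing is effective."""
    years = np.arange(horizon)
    surv = np.exp(-0.05 * years[None, :] * (0.5 + u[:, None]))
    return (surv / (1.0 + rate) ** years).sum(axis=1)

n = 20_000
u = rng.random(n)
plain = discounted_qalys(rng.random(2 * n))                     # 2n independent draws
anti = 0.5 * (discounted_qalys(u) + discounted_qalys(1.0 - u))  # n antithetic pairs

print(plain.mean(), plain.var() / (2 * n))   # same total model evaluations
print(anti.mean(),  anti.var() / n)          # estimator variance typically far lower
```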
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates are affected by environmental conditions, are linked with soil heterotrophic CO2 emissions, and serve for estimating soil carbon sequestration. By the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided that the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during the year 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although soil moisture explained only a small percentage of the variance in soil heterotrophic respiration, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture response of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate the right levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against the annual heterotrophic respiration and soil carbon stocks.
Accounting for therapist variability in couple therapy outcomes: what really matters?
Owen, Jesse; Duncan, Barry; Reese, Robert Jeff; Anker, Morten; Sparks, Jacqueline
2014-01-01
This study examined whether therapist gender, professional discipline, experience conducting couple therapy, and average second-session alliance score would account for the variance in outcomes attributed to the therapist. The authors investigated therapist variability in couple therapy with 158 couples randomly assigned to and treated by 18 therapists in a naturalistic setting. Consistent with previous studies in individual therapy, in this study therapists accounted for 8.0% of the variance in client outcomes and 10% of the variance in client alliance scores. Therapist average alliance score and experience conducting couple therapy were salient predictors of client outcomes attributed to therapist. In contrast, therapist gender and discipline did not significantly account for the variance in client outcomes attributed to therapists. Tests of incremental validity demonstrated that therapist average alliance score and therapist experience uniquely accounted for the variance in outcomes attributed to the therapist. Emphasis on improving therapist alliance quality and specificity of therapist experience in couple therapy are discussed.
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as that of an upright set of faces. The third experiment replicated and extended the first two experiments using the method of constant stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.
Control Variate Estimators of Survivor Growth from Point Samples
Francis A. Roesch; Paul C. van Deusen
1993-01-01
Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...
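The control-variate construction the abstract refers to can be illustrated generically; a sketch in Python with made-up data (the auxiliary variable `x`, its known mean `mu_x`, and the synthetic target `y` are assumptions for the sketch, not the paper's forestry estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y is the quantity of interest (e.g. survivor growth)
# and x is an auxiliary variable whose population mean mu_x is known.
n = 500
x = rng.normal(5.0, 1.0, n)             # auxiliary measurement
y = 2.0 * x + rng.normal(0.0, 0.5, n)   # correlated target quantity
mu_x = 5.0                              # known mean of x (assumed)

# Control-variate estimator: y_cv = y - b (x - mu_x), with the
# variance-minimizing coefficient b = Cov(y, x) / Var(x).
b = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
y_cv = y - b * (x - mu_x)

print(f"plain estimator  mean {y.mean():.3f}, variance {y.var(ddof=1) / n:.5f}")
print(f"control variate  mean {y_cv.mean():.3f}, variance {y_cv.var(ddof=1) / n:.5f}")
```

The stronger the correlation between the target and the auxiliary variable, the larger the variance reduction; with zero correlation the adjustment does nothing.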
Comparison of noise reduction systems
NASA Astrophysics Data System (ADS)
Noel, S. D.; Whitaker, R. W.
1991-06-01
When using infrasound as a tool for verification, the most important measurement to determine yield has been the peak-to-peak pressure amplitude of the signal. Therefore, there is a need to operate at the most favorable signal-to-noise ratio (SNR) possible. Winds near the ground can degrade the SNR, thereby making accurate signal amplitude measurement difficult. Wind noise reduction techniques were developed to help alleviate this problem; however, a noise-reducing system should reduce the noise without introducing distortion of coherent signals. An experiment is described to study the response of a variety of noise-reducing configurations to a signal generated by an underground test (UGT) at the Nevada Test Site (NTS). In addition to the signal, background noise reduction is examined through measurements of variance. Sensors using two particular geometries of noise-reducing equipment, the spider and the cross, appear to deliver the best SNR. Because the spider configuration is easier to deploy, it is now the most commonly used.
Monte Carlo isotopic inventory analysis for complex nuclear systems
NASA Astrophysics Data System (ADS)
Phruksarojanakun, Phiphat
Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools give MCise unique capabilities to improve the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results for isotopes that are produced through rare reaction pathways. Biased Source Sampling aims at increasing the sampling frequency of rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/R_T^2) and the other on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both the precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The result confirms MCise as a reliable tool for inventory analysis of complex nuclear systems.
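As a sketch of how such figures of merit are computed in practice, with made-up tally data (the abstract's 1/R_T^2 omits the time factor, while the textbook MC figure of merit 1/(R^2 T) normalizes by run time so that it is invariant to run length):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tally of a target-isotope quantity from one MC run.
samples = rng.exponential(scale=3.0, size=20_000)
cpu_time = 1.7                 # seconds; stand-in for a measured run time

mean = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(samples.size)
r_t = sem / mean               # R_T, relative error of the target estimate

fom_simple = 1.0 / r_t**2              # the abstract's 1/R_T^2
fom_timed = 1.0 / (r_t**2 * cpu_time)  # classic 1/(R^2 T), run-length invariant
print(f"R_T = {r_t:.4f}, 1/R_T^2 = {fom_simple:.0f}, 1/(R^2 T) = {fom_timed:.0f}")
```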
Brain Mechanisms Supporting Modulation of Pain by Mindfulness Meditation
Zeidan, F.; Martucci, K.T.; Kraft, R.A.; Gordon, N.S.; McHaffie, J.G.; Coghill, R.C.
2011-01-01
The subjective experience of one’s environment is constructed by interactions among sensory, cognitive, and affective processes. For centuries, meditation has been thought to influence such processes by enabling a non-evaluative representation of sensory events. To better understand how meditation influences the sensory experience, we employed arterial spin labeling (ASL) functional magnetic resonance imaging to assess the neural mechanisms by which mindfulness meditation influences pain in healthy human participants. After four days of mindfulness meditation training, meditating in the presence of noxious stimulation significantly reduced pain-unpleasantness ratings by 57% and pain-intensity ratings by 40% when compared to rest. A two-factor repeated-measures analysis of variance was used to identify interactions between meditation and pain-related brain activation. Meditation reduced pain-related activation of the contralateral primary somatosensory cortex. Multiple regression analysis was used to identify brain regions associated with individual differences in the magnitude of meditation-related pain reductions. Meditation-induced reductions in pain intensity ratings were associated with increased activity in the anterior cingulate cortex and anterior insula, areas involved in the cognitive regulation of nociceptive processing. Reductions in pain unpleasantness ratings were associated with orbitofrontal cortex activation, an area implicated in reframing the contextual evaluation of sensory events. Moreover, reductions in pain unpleasantness also were associated with thalamic deactivation, which may reflect a limbic gating mechanism involved in modifying interactions between afferent input and executive-order brain areas. Taken together, these data indicate that meditation engages multiple brain mechanisms that alter the construction of the subjectively available pain experience from afferent information. PMID:21471390
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross-validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
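One classic pitfall of the kind such simulations expose (not necessarily one of the paper's exact scenarios) is performing feature selection on the full dataset before cross-validation; a hedged sketch using scikit-learn on pure-noise data, where any apparent accuracy above chance is bias:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(80, 500))        # pure-noise "features"
y = rng.integers(0, 2, size=80)       # random class labels

# Pitfall: select features on ALL data, then cross-validate the classifier.
keep = SelectKBest(f_classif, k=10).fit(X, y).get_support()
biased = cross_val_score(LogisticRegression(max_iter=1000), X[:, keep], y, cv=5)

# Correct: refit the feature selection inside every training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     LogisticRegression(max_iter=1000))
unbiased = cross_val_score(pipe, X, y, cv=5)

print(f"biased CV accuracy   {biased.mean():.2f}")   # typically well above 0.5
print(f"unbiased CV accuracy {unbiased.mean():.2f}") # near chance, as it should be
```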
Predictors of burnout among correctional mental health professionals.
Gallavan, Deanna B; Newman, Jody L
2013-02-01
This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work-family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work-family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work-family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.
Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard; Micol, John R.
2010-01-01
The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.
Parametric Cooling of Ultracold Atoms
NASA Astrophysics Data System (ADS)
Boguslawski, Matthew; Bharath, H. M.; Barrios, Maryrose; Chapman, Michael
2017-04-01
An oscillator is characterized by a restoring force which determines the natural frequency at which oscillations occur. The amplitude and phase noise of these oscillations can be amplified or squeezed by modulating the magnitude of this force (e.g. the stiffness of the spring) at twice the natural frequency. This is parametric excitation, a long-studied phenomenon in both the classical and quantum regimes. Parametric cooling, or the parametric squeezing of thermo-mechanical noise in oscillators, has been studied in micro-mechanical oscillators and trapped ions. We study parametric cooling in ultracold atoms. This method shows a modest reduction of the variance of atomic momenta, and can be easily employed with pre-existing controls in many experiments. Parametric cooling is comparable to delta-kicked cooling, sharing similar limitations. We expect this cooling to find utility in microgravity experiments where the experiment duration is limited by atomic free expansion.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
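A sketch of the generative model the abstract describes, with assumed prior parameters (the shape and scale below are illustrative, not the paper's fitted values): draw each sample's variance from an inverse gamma distribution, then draw a zero-mean Gaussian EMG sample with that variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed shape/scale of the inverse gamma prior on the signal variance.
alpha, beta = 3.0, 2.0
n = 50_000

# Inverse gamma draw = reciprocal of a gamma draw (numpy's gamma takes a
# scale parameter, so scale = 1/beta gives rate beta).
variances = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n)

# Zero-mean Gaussian EMG samples, each with its own latent variance.
emg = rng.normal(0.0, np.sqrt(variances))

# Marginally this is a heavy-tailed (scaled Student-t) signal; its excess
# kurtosis relative to a Gaussian reflects the spread of the variances.
kurt = ((emg - emg.mean()) ** 4).mean() / emg.var() ** 2
print(f"sample kurtosis {kurt:.2f} (a Gaussian would give 3)")
```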
NASA Astrophysics Data System (ADS)
Liu, Yahui; Fan, Xiaoqian; Lv, Chen; Wu, Jian; Li, Liang; Ding, Dawei
2018-02-01
Information fusion for INS/GPS navigation systems based on filtering technology is a current research focus. In order to improve the precision of navigation information, a navigation technology based on an Adaptive Kalman Filter with an attenuation factor is proposed in this paper to restrain noise. The algorithm continuously updates the measurement noise variance and process noise variance of the system by collecting the estimated and measured values, and this method can suppress white noise. Because a measured value closer to the current time more accurately reflects the characteristics of the noise, an attenuation factor is introduced to increase the weight of the current value, in order to deal with the noise variance caused by environmental disturbances. To validate the effectiveness of the proposed algorithm, a series of road tests were carried out in an urban environment. The GPS and IMU data of the experiments were collected and processed by dSPACE and MATLAB/Simulink. Based on the test results, the accuracy of the proposed algorithm is 20% higher than that of a traditional Adaptive Kalman Filter. It also shows that the precision of the integrated navigation can be improved due to the reduction of the influence of environmental noise.
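A hedged, one-dimensional sketch of a fading-memory adaptive Kalman filter in this general family (a Sage-Husa-style noise update; the attenuation factor, state model, and all constants are assumptions, far simpler than the paper's full INS/GPS filter):

```python
import numpy as np

rng = np.random.default_rng(5)

b = 0.95                       # assumed attenuation (fading) factor
x_est, p_est = 0.0, 1.0        # state estimate and its variance
q, r_est = 1e-4, 1.0           # process noise and initial R estimate
k_step = 0

for z in rng.normal(0.0, 0.5, size=200):   # simulated noisy measurements
    k_step += 1
    p_pred = p_est + q                      # time update (random-walk state)
    innov = z - x_est                       # innovation
    # Fading-memory weight d_k = (1 - b) / (1 - b**k): recent innovations
    # dominate, so the estimated measurement noise R tracks the environment.
    d_k = (1.0 - b) / (1.0 - b ** k_step)
    r_est = max((1.0 - d_k) * r_est + d_k * (innov ** 2 - p_pred), 1e-6)
    gain = p_pred / (p_pred + r_est)        # measurement update
    x_est += gain * innov
    p_est = (1.0 - gain) * p_pred

print(f"estimated R = {r_est:.3f} (true value 0.25)")
```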
Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
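Importance sampling itself is easy to show in miniature. A sketch estimating a rare-event probability (the rare-event setting is an analogy of my own choosing, not the paper's gas-flow estimators): draw from a proposal concentrated where the integrand matters and reweight by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(11)

# Estimate P(X > 4) for X ~ N(0, 1): plain MC wastes nearly every sample.
n = 100_000

def phi(x):
    """Standard normal pdf."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# Plain Monte Carlo.
plain = (rng.normal(size=n) > 4.0).mean()

# Importance sampling: proposal q = N(4, 1) centered on the rare region,
# with each sample reweighted by the likelihood ratio p(x) / q(x).
xs = rng.normal(4.0, 1.0, size=n)
w = phi(xs) / phi(xs - 4.0)
is_est = np.mean((xs > 4.0) * w)

print(f"plain MC {plain:.2e}, importance sampling {is_est:.2e}")
print(f"exact    3.17e-05")   # 1 - Phi(4)
```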
Bias correction of daily satellite precipitation data using genetic algorithm
NASA Astrophysics Data System (ADS)
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) was produced by blending satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aimed to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remained high during the wet season over Java Island. This paper presents a bias correction scheme to adjust the statistical moments of CHIRP using observed precipitation data. The scheme combines a Genetic Algorithm with a Nonlinear Power Transformation, and the results were evaluated across different seasons and elevation levels. The experimental results reveal that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions in the first- and second-quantile biases. However, the bias in the third quantile was reduced only during dry months. Across elevation levels, the performance of the bias correction process differed significantly only for the skewness indicator.
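A toy version of such a scheme, under stated assumptions: the synthetic rainfall data, the moment-matching cost, and the simple selection-and-mutation loop standing in for the paper's genetic algorithm are all my own illustrative choices, and the transform is the generic power form corrected = a * satellite**b.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily gauge (reference) and biased satellite rainfall.
obs = rng.gamma(2.0, 5.0, size=1_000)
sat = np.clip(0.6 * obs**1.2 + rng.normal(0, 1, 1_000), 0, None)

def moment_bias(params):
    """Squared bias in mean and variance after the power transformation."""
    a, b = params
    corr = a * sat**b
    return (corr.mean() - obs.mean())**2 + (corr.var() - obs.var())**2

# Tiny evolutionary search standing in for a genetic algorithm.
pop = rng.uniform([0.1, 0.5], [3.0, 2.0], size=(50, 2))
for _ in range(100):
    scores = np.array([moment_bias(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                    # selection
    children = parents[rng.integers(0, 10, (40,))] \
        + rng.normal(0, 0.05, (40, 2))                        # mutation
    pop = np.vstack([parents, children])

a, b = pop[np.argmin([moment_bias(p) for p in pop])]
print(f"fitted transform: corrected = {a:.2f} * sat**{b:.2f}")
```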
Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper reviews issues involved in converting continuous variables to nominal variables to be used in the OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are reviewed in addition to concerns regarding the loss of variance and a reduction in…
Control Variates and Optimal Designs in Metamodeling
2013-03-01
2.4.5 Selection of Control Variates for Inclusion in Model...meet the normality assumption (Nelson 1990, Nelson and Yang 1992, Anonuevo and Nelson 1988). Jackknifing, splitting, and bootstrapping can be used to...freedom to estimate the variance are lost due to being used for the control variate inclusion. This means the variance reduction achieved must now be
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
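A toy illustration of the correlated-process idea: two Ornstein-Uhlenbeck-type walks driven by the same noise increments, where the auxiliary process has a known mean. The processes, parameters, and the "known solution" below are assumptions for the sketch, far simpler than the paper's Fokker-Planck particle scheme.

```python
import numpy as np

rng = np.random.default_rng(8)

n_paths, n_steps, dt = 2_000, 200, 0.01
theta, sigma = 1.0, 1.0

x = np.zeros(n_paths)   # main process (quantity of interest)
y = np.zeros(n_paths)   # auxiliary process, known stationary mean 0

for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # SHARED noise increments
    x += -theta * x * dt + 0.1 * np.sin(x) * dt + sigma * dw
    y += -theta * y * dt + sigma * dw

# Correlated estimator: E[x] ~ mean(x - y) + known E[y] (= 0). Because x
# and y share their driving noise, their sampling errors largely cancel.
se_plain = x.std(ddof=1) / np.sqrt(n_paths)
se_corr = (x - y).std(ddof=1) / np.sqrt(n_paths)
print(f"plain      {x.mean():+.4f} (std err {se_plain:.4f})")
print(f"correlated {(x - y).mean():+.4f} (std err {se_corr:.4f})")
```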
Adaptive use of research aircraft data sets for hurricane forecasts
NASA Astrophysics Data System (ADS)
Biswas, M. K.; Krishnamurti, T. N.
2008-02-01
This study uses an adaptive observational strategy for hurricane forecasting. It shows the impacts of Lidar Atmospheric Sensing Experiment (LASE) and dropsonde data sets from the Convection and Moisture Experiment (CAMEX) field campaigns on hurricane track and intensity forecasts. The following cases are used in this study: Bonnie, Danielle and Georges of 1998 and Erin, Gabrielle and Humberto of 2001. A single model run for each storm is carried out using the Florida State University Global Spectral Model (FSUGSM) with the European Center for Medium Range Weather Forecasts (ECMWF) analysis as initial conditions, in addition to 50 other model runs where the analysis is randomly perturbed for each storm. The centers of maximum variance of the DLM heights are located from the forecast error variance fields at the 84-hr forecast. Back correlations are then performed between the centers of these maximum variances and the fields at the 36-hr forecast. The regions having the highest correlations in the vicinity of the hurricanes are indicative of regions from where the error growth emanates and suggest the need for additional observations. Data sets are next assimilated in those areas that contain high correlations. Forecasts are computed using the new initial conditions for the storm cases, and track and intensity skills are then examined with respect to the control forecast. The adaptive strategy is capable of identifying sensitive areas where additional observations can help in reducing the hurricane track forecast errors. A reduction of position error by approximately 52% for day 3 of the forecast (averaged over 7 storm cases) relative to the control runs is observed. The intensity forecast shows only a slight positive impact due to the model’s coarse resolution.
Yakobov, Esther; Scott, Whitney; Stanish, William D; Tanzer, Michael; Dunbar, Michael; Richardson, Glen; Sullivan, Michael J L
2018-05-01
Perceptions of injustice have been associated with problematic recovery outcomes in individuals with a wide range of debilitating pain conditions. It has been suggested that, in patients with chronic pain, perceptions of injustice might arise in response to experiences characterized by illness-related pain severity, depressive symptoms, and disability. If symptoms severity and disability are important contributors to perceived injustice (PI), it follows that interventions that yield reductions in symptom severity and disability should also contribute to reductions in perceptions of injustice. The present study examined the relative contributions of postsurgical reductions in pain severity, depressive symptoms, and disability to the prediction of reductions in perceptions of injustice. The study sample consisted of 110 individuals (69 women and 41 men) with osteoarthritis of the knee scheduled for total knee arthroplasty (TKA). Patients completed measures of perceived injustice, depressive symptoms, pain, and disability at their presurgical evaluation, and at 1-year follow-up. The results revealed that reductions in depressive symptoms and disability, but not pain severity, were correlated with reductions in perceived injustice. Regression analyses revealed that reductions in disability and reductions in depressive symptoms contributed modest but significant unique variance to the prediction of postsurgical reductions in perceived injustice. The present findings are consistent with current conceptualizations of injustice appraisals that propose a central role for symptom severity and disability as determinants of perceptions of injustice in patients with persistent pain. The results suggest that the inclusion of psychosocial interventions that target depressive symptoms and perceived injustice might augment the impact of rehabilitation programs made available for individuals recovering from TKA.
MPF Top-Mast Measured Temperature
1997-10-14
This temperature figure shows the change in the mean and variance of the temperature fluctuations at the Pathfinder landing site. Sols 79 and 80 are very similar, with a significant reduction of the mean and variance on Sol 81. The science team suspects that a cold front passed over the landing site between Sols 80 and 81. http://photojournal.jpl.nasa.gov/catalog/PIA00978
Schmid, Patrick; Yao, Hui; Galdzicki, Michal; Berger, Bonnie; Wu, Erxi; Kohane, Isaac S.
2009-01-01
Background Although microarray technology has become the most common method for studying global gene expression, a plethora of technical factors across an experiment contribute to the variability of genome-wide gene expression profiling using peripheral whole blood. A practical platform needs to be established in order to obtain reliable and reproducible data that meet clinical requirements for biomarker studies. Methods and Findings We used peripheral whole blood samples with globin reduction and performed genome-wide transcriptome analysis using Illumina BeadChips. Real-time PCR was subsequently used to evaluate the quality of the array data and to elucidate the mode in which hemoglobin interferes with gene expression profiling. We demonstrated that, when applied in the context of standard microarray processing procedures, globin reduction results in a consistent and significant increase in the quality of beadarray data. When compared to their pre-globin reduction counterparts, post-globin reduction samples show improved detection statistics, lowered variance and increased sensitivity. More importantly, gender gene separation is remarkably clearer in post-globin reduction samples than in pre-globin reduction samples. Our study suggests that the poor data obtained from pre-globin reduction samples result from the high concentration of hemoglobin derived from red blood cells either interfering with target mRNA binding or contributing a pseudo-binding background signal. Conclusion We therefore recommend combining globin mRNA reduction in peripheral whole blood samples with hybridization on Illumina BeadChips as a practical approach for biomarker studies. PMID:19381341
Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.
Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E
2017-12-11
Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
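As a point of reference for the linear approaches the abstract mentions, a sketch of one widely used occupancy ("Lassen") plot with made-up regional binding potentials (the values, true occupancy, and non-displaceable term are assumptions; ordinary least squares here ignores exactly the error structure the paper's maximum likelihood methods are designed to model):

```python
import numpy as np

# Hypothetical regional binding potentials at baseline and post-drug.
bp_base = np.array([2.1, 1.6, 1.2, 0.9, 0.5, 0.3])
occ_true, vnd = 0.6, 0.1        # assumed occupancy and non-displaceable term
bp_post = (1 - occ_true) * (bp_base - vnd) + vnd

# Occupancy plot: regress (baseline - post) on baseline across regions;
# the slope estimates occupancy and the x-intercept the non-displaceable
# binding, since baseline - post = occ * (baseline - VND).
slope, intercept = np.polyfit(bp_base, bp_base - bp_post, 1)
print(f"estimated occupancy {slope:.2f}, VND {-intercept / slope:.2f}")
```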
The Variance of Intraclass Correlations in Three- and Four-Level Models
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomassen, Mads; Skov, Vibe; Eiriksdottir, Freyja
2006-06-16
The quality of DNA microarray based gene expression data relies on the reproducibility of several steps in a microarray experiment. We have developed a spotted genome wide microarray chip with oligonucleotides printed in duplicate in order to minimise undesirable biases, thereby optimising detection of true differential expression. The validation study design consisted of an assessment of the microarray chip performance using the MessageAmp and FairPlay labelling kits. Intraclass correlation coefficient (ICC) was used to demonstrate that MessageAmp was significantly more reproducible than FairPlay. Further examinations with MessageAmp revealed the applicability of the system. The linear range of the chips was three orders of magnitude, the precision was high, as 95% of measurements deviated less than 1.24-fold from the expected value, and the coefficient of variation for relative expression was 13.6%. Relative quantitation was more reproducible than absolute quantitation and substantial reduction of variance was attained with duplicate spotting. An analysis of variance (ANOVA) demonstrated no significant day-to-day variation.
Direct simulation of compressible turbulence in a shear flow
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
Compressibility effects on the turbulence in homogeneous shear flow are investigated. The growth of the turbulent kinetic energy was found to decrease with increasing Mach number: a phenomenon which is similar to the reduction of turbulent velocity intensities observed in experiments on supersonic free shear layers. An examination of the turbulent energy budget shows that both the compressible dissipation and the pressure-dilatation contribute to the decrease in the growth of kinetic energy. The pressure-dilatation is predominantly negative in homogeneous shear flow, in contrast to its predominantly positive behavior in isotropic turbulence. The different signs of the pressure-dilatation are explained by theoretical consideration of the equations for the pressure variance and density variance. Previously, the following results were obtained for isotropic turbulence: (1) the normalized compressible dissipation is of O(M_t^2); and (2) there is approximate equipartition between the kinetic and potential energies associated with the fluctuating compressible mode. Both of these results were substantiated in the case of homogeneous shear. The dilatation field is significantly more skewed and intermittent than the vorticity field. Strong compressions seem to be more likely than strong expansions.
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
[Figure residue: diagram of campaign-level model inputs/outputs, aggregation, and metamodeling complexity (spatial, temporal, etc.)] ...reduction, are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used a combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in developing a new optimum rain gauge system.
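A skeletal version of such an optimization, under loud assumptions: the candidate sites are random, and the objective below is a crude distance-based proxy for the kriging (estimation) variance the paper actually minimizes; only the simulated annealing mechanics are meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)

sites = rng.uniform(0, 100, size=(84, 2))      # candidate gauge locations
grid = np.stack(np.meshgrid(np.linspace(0, 100, 25),
                            np.linspace(0, 100, 25)), -1).reshape(-1, 2)

def cost(idx):
    """Proxy for network estimation variance: mean squared distance from
    every grid point to its nearest selected gauge (NOT kriging variance)."""
    d = np.linalg.norm(grid[:, None, :] - sites[idx][None, :, :], axis=2)
    return (d.min(axis=1) ** 2).mean()

k, temp = 30, 10.0
cur = rng.choice(84, size=k, replace=False)
cur_cost = cost(cur)
for _ in range(2_000):
    cand = cur.copy()
    cand[rng.integers(k)] = rng.integers(84)    # move one gauge
    if len(set(cand)) < k:
        continue                                # reject duplicate sites
    c = cost(cand)
    # Accept improvements always, worse moves with Boltzmann probability.
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / temp):
        cur, cur_cost = cand, c
    temp *= 0.999                               # cooling schedule

print(f"final variance proxy {cur_cost:.2f}")
```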
Discrete filtering techniques applied to sequential GPS range measurements
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1987-01-01
The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
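A second-order (position-velocity) Kalman filter of the kind described can be sketched in a few lines; the dynamics, noise covariances, and simulated ranges below are assumptions rather than the paper's receiver model, and the achieved noise reduction depends on this tuning (the paper reports a steady-state factor of four).

```python
import numpy as np

rng = np.random.default_rng(4)

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # position-velocity dynamics
H = np.array([[1.0, 0.0]])                # range is the only observation
Q = 1e-3 * np.eye(2)                      # assumed process noise
R = np.array([[4.0]])                     # measurement noise variance

# Simulated ranges: constant range rate plus white noise (variance 4).
truth = 20_000.0 + 50.0 * np.arange(60)
ranges = truth + rng.normal(0.0, 2.0, 60)

x = np.array([[ranges[0]], [0.0]])
P = np.diag([10.0, 10.0])
smoothed = []
for z in ranges:
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)  # update
    P = (np.eye(2) - K @ H) @ P
    smoothed.append(x[0, 0])

resid = np.array(smoothed)[10:] - truth[10:]   # skip the transient
print(f"raw noise variance 4.0, filtered variance {resid.var():.2f}")
```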
Analytic variance estimates of Swank and Fano factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
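A sketch of the two metrics plus a brute-force uncertainty check (the gamma-distributed detector outputs are a made-up stand-in; the paper's contribution is that the bootstrap-style uncertainty below can instead be obtained analytically from accumulated moments):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical detector outputs (e.g. optical quanta per absorbed x ray).
out = rng.gamma(shape=25.0, scale=40.0, size=20_000)

m1, m2 = out.mean(), np.mean(out**2)
swank = m1**2 / m2                    # Swank I = M1^2 / (M0 M2), M0 = 1 here
fano = out.var(ddof=1) / m1           # Fano = variance-to-mean ratio

# Bootstrap coefficient of variation of the Swank estimate, the kind of
# uncertainty the analytic moment-based estimators deliver cheaply.
idx = rng.integers(0, out.size, (200, out.size))
sw = out[idx].mean(1)**2 / (out[idx]**2).mean(1)
print(f"Swank {swank:.4f} (CV {sw.std() / sw.mean():.2e}), Fano {fano:.1f}")
```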
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assay, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance is stressed of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity. Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between-assay variance.
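A sketch of the recommended curvilinear fit, a third-order polynomial in the square root of concentration, on made-up standards (the concentrations, percent-bound values, and the sample reading are illustrative only):

```python
import numpy as np

# Hypothetical RIA standard curve: percent bound versus concentration.
conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])    # ng/mL
bound = np.array([92.0, 78.0, 66.0, 48.0, 35.0, 24.0, 15.0])   # %B/B0

# Third-order polynomial in sqrt(concentration), as recommended above.
coef = np.polyfit(np.sqrt(conc), bound, deg=3)

def dose(b, lo=0.0, hi=15.0, tol=1e-6):
    """Invert the fitted curve for an unknown by bisection on sqrt(conc)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Binding falls with dose, so step toward the measured value.
        if np.polyval(coef, mid) > b:
            lo = mid
        else:
            hi = mid
    return lo**2

print(f"sample reading 55 %B/B0 -> {dose(55.0):.1f} ng/mL")
```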
Mudallal, Rola H.; Othman, Wafa’a M.; Al Hassan, Nahid F.
2017-01-01
Nurse burnout is a widespread phenomenon characterized by a reduction in nurses’ energy that manifests in emotional exhaustion, lack of motivation, and feelings of frustration and may lead to reductions in work efficacy. This study was conducted to assess the level of burnout among Jordanian nurses and to investigate the influence of leader empowering behaviors (LEBs) on nurses’ feelings of burnout in an endeavor to improve nursing work outcomes. A cross-sectional and correlational design was used. Leader Empowering Behaviors Scale and the Maslach Burnout Inventory (MBI) were employed to collect data from 407 registered nurses, recruited from 11 hospitals in Jordan. The Jordanian nurses exhibited high levels of burnout as demonstrated by their high scores for Emotional Exhaustion (EE) and Depersonalization (DP) and moderate scores for Personal Accomplishment (PA). Factors related to work conditions, nurses’ demographic traits, and LEBs were significantly correlated with the burnout categories. A stepwise regression model exposed 4 factors that predicted EE: hospital type, nurses’ work shift, providing autonomy, and fostering participation in decision making. Gender, fostering participation in decision making, and department type were responsible for 5.9% of the DP variance, whereas facilitating goal attainment and nursing experience accounted for 8.3% of the PA variance. This study highlights the importance of the role of nurse leaders in improving work conditions and empowering and motivating nurses to decrease nurses’ feelings of burnout, reduce turnover rates, and improve the quality of nursing care. PMID:28844166
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
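The abstract's approach can be miniaturized: apply propagation of errors to a simplified rating of the form mean - 2*SD of subject attenuations and compare it with a Monte Carlo simulation of subjects (the simplified rating form, normality assumption, and all numbers are assumptions; the actual NRR computation is more involved).

```python
import numpy as np

rng = np.random.default_rng(10)

mu, sigma, n = 30.0, 5.0, 20        # assumed subject attenuations, in dB

# Propagation of errors for normal data: Var(mean) = s^2/n and
# Var(SD) ~ s^2 / (2(n-1)), so Var(mean - 2 SD) ~ s^2/n + 2 s^2/(n-1).
analytic_sd = np.sqrt(sigma**2 / n + 2.0 * sigma**2 / (n - 1))

# Monte Carlo simulation of subject attenuations for the same rating.
sims = rng.normal(mu, sigma, size=(50_000, n))
ratings = sims.mean(axis=1) - 2.0 * sims.std(axis=1, ddof=1)

print(f"analytic rating SD {analytic_sd:.3f} dB, "
      f"Monte Carlo {ratings.std():.3f} dB")
```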
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic solution, and the information obtained from its solution is used to speed up the algorithm. We have devised a new decomposition scheme to improve the convergence of this algorithm.
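The approximation idea can be sketched as a control-variate-style split, E[f] = E[g] + E[f - g], where the cheap piecewise-linear g absorbs most of the variance; the recourse function, knots, and distribution below are toy assumptions, not the dissertation's stochastic programs.

```python
import numpy as np

rng = np.random.default_rng(12)

def recourse(xi):
    """Stand-in for an expensive second-stage LP value function."""
    return np.maximum(0.0, xi - 1.0) ** 1.5 + 0.2 * np.sin(xi)

def cheap_pwl(xi):
    """Piecewise-linear approximation built from a few exact evaluations."""
    knots = np.array([-3.0, 0.0, 1.0, 2.0, 4.0])
    return np.interp(xi, knots, recourse(knots))

n = 20_000
xi = rng.normal(1.0, 1.0, n)

# The cheap function's mean can be driven to high precision at low cost...
g_mean = cheap_pwl(rng.normal(1.0, 1.0, 2_000_000)).mean()
# ...while the expensive samples only estimate the small residual f - g.
resid = recourse(xi) - cheap_pwl(xi)
estimate = g_mean + resid.mean()

print(f"plain estimator variance    {recourse(xi).var(ddof=1) / n:.2e}")
print(f"residual estimator variance {resid.var(ddof=1) / n:.2e}")
print(f"estimate {estimate:.4f}")
```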
Gibbs, Jeremy J; Rice, Eric
2016-01-01
The purpose of this study was to understand which social context factors most influence depression symptomology among sexual minority male youth (SMMY). In 2011, 195 SMMY who use Grindr were recruited to complete an online survey in Los Angeles, California. Items focused on social context variables and depression symptomology. Hierarchical multiple regressions were conducted using an ecological framework. The best fitting model accounted for 29.5% of the variance in depression. Experiences of homophobia, gay community connection, presence of an objecting network member, and emotional support were found to be significant predictors. That past experiences of homophobia continue to affect youth indicates the need for interventions aimed at the reduction of homophobia in youths' social contexts. Interventions that teach youth skills to manage objecting viewpoints or help youth to reorganize their social networks may help to reduce the impact of an objecting network alter.
Risk and the evolution of human exchange
Kaplan, Hillard S.; Schniter, Eric; Smith, Vernon L.; Wilson, Bart J.
2012-01-01
Compared with other species, exchange among non-kin is a hallmark of human sociality in both the breadth of individuals and total resources involved. One hypothesis is that extensive exchange evolved to buffer the risks associated with hominid dietary specialization on calorie dense, large packages, especially from hunting. ‘Lucky’ individuals share food with ‘unlucky’ individuals with the expectation of reciprocity when roles are reversed. Cross-cultural data provide prima facie evidence of pair-wise reciprocity and an almost universal association of high-variance (HV) resources with greater exchange. However, such evidence is not definitive; an alternative hypothesis is that food sharing is really ‘tolerated theft’, in which individuals possessing more food allow others to steal from them, owing to the threat of violence from hungry individuals. Pair-wise correlations may reflect proximity providing greater opportunities for mutual theft of food. We report a laboratory experiment of foraging and food consumption in a virtual world, designed to test the risk-reduction hypothesis by determining whether people form reciprocal relationships in response to variance of resource acquisition, even when there is no external enforcement of any transfer agreements that might emerge. Individuals can forage in a high-mean, HV patch or a low-mean, low-variance (LV) patch. The key feature of the experimental design is that individuals can transfer resources to others. We find that sharing hardly occurs after LV foraging, but among HV foragers sharing increases dramatically over time. The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships. PMID:22513855
Lifelong Bilingualism Maintains Neural Efficiency for Cognitive Control in Aging
Gold, Brian T.; Kim, Chobok; Johnson, Nathan F.; Kryscio, Richard J.; Smith, Charles D.
2013-01-01
Recent behavioral data have shown that lifelong bilingualism can maintain youthful cognitive control abilities in aging. Here, we provide the first direct evidence of a neural basis for the bilingual cognitive control boost in aging. Two experiments were conducted, using a perceptual task switching paradigm, and including a total of 110 participants. In Experiment 1, older adult bilinguals showed better perceptual switching performance than their monolingual peers. In Experiment 2, younger and older adult monolinguals and bilinguals completed the same perceptual task switching experiment while fMRI was performed. Typical age-related performance reductions and fMRI activation increases were observed. However, like younger adults, bilingual older adults outperformed their monolingual peers while displaying decreased activation in left lateral frontal cortex and cingulate cortex. Critically, this attenuation of age-related over-recruitment associated with bilingualism was directly correlated with better task switching performance. In addition, the lower BOLD response in frontal regions accounted for 82% of the variance in the bilingual task switching reaction time advantage. These results suggest that lifelong bilingualism offsets age-related declines in the neural efficiency for cognitive control processes. PMID:23303919
Berger, Philip; Messner, Michael J; Crosby, Jake; Vacs Renwick, Deborah; Heinrich, Austin
2018-05-01
Spore reduction can be used as a surrogate measure of Cryptosporidium natural filtration efficiency. Estimates of log10 (log) reduction were derived from spore measurements in paired surface and well water samples in Casper, Wyoming, and Kearney, Nebraska. We found that these data were suitable for testing the hypothesis (H0) that the average reduction at each site was 2 log or less, using a one-sided Student's t-test. After establishing data quality objectives for the test (expressed as tolerable Type I and Type II error rates), we evaluated the test's performance as a function of the (a) true log reduction, (b) number of paired samples assayed and (c) variance of observed log reductions. We found that 36 paired spore samples are sufficient to achieve the objectives over a wide range of variance, including the variances observed in the two data sets. We also explored the feasibility of using smaller numbers of paired spore samples to supplement bioparticle counts for screening purposes in alluvial aquifers, to differentiate wells with large volume surface water induced recharge from wells with negligible surface water induced recharge. With key assumptions, we propose a normal statistical test of the same hypothesis (H0), but with different performance objectives. As few as six paired spore samples appear adequate as a screening metric to supplement bioparticle counts to differentiate wells in alluvial aquifers with large volume surface water induced recharge. For the case when all available information (including failure to reject H0 based on the limited paired spore data) leads to the conclusion that wells have large surface water induced recharge, we recommend further evaluation using additional paired biweekly spore samples. Published by Elsevier GmbH.
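The one-sided test described here is straightforward to reproduce. A minimal sketch with hypothetical paired log-reduction values, assuming a SciPy version (1.6 or later) that supports the alternative argument of ttest_1samp:

    import numpy as np
    from scipy import stats

    # hypothetical paired log reductions: log10(surface count) - log10(well count)
    log_reduction = np.array([2.6, 3.1, 2.2, 2.9, 3.4, 2.7, 2.5, 3.0,
                              2.8, 2.4, 3.2, 2.6])

    # H0: mean log reduction <= 2; reject for large t (alternative 'greater')
    t_stat, p_value = stats.ttest_1samp(log_reduction, popmean=2.0,
                                        alternative='greater')
    print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")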
Convergent evolution of reduced energy demands in extremophile fish
Arias-Rodriguez, Lenin; Tobler, Michael
2017-01-01
Convergent evolution in organismal function can arise from nonconvergent changes in traits that contribute to that function. Theory predicts that low resource availability and high maintenance costs in extreme environments select for reductions in organismal energy demands, which could be attained through modifications of body size or metabolic rate. We tested for convergence in energy demands and underlying traits by investigating livebearing fish (genus Poecilia) that have repeatedly colonized toxic, hydrogen sulphide-rich springs. We quantified variation in body size and routine metabolism across replicated sulphidic and non-sulphidic populations in nature, modelled total organismal energy demands, and conducted a common-garden experiment to test whether population differences had a genetic basis. Sulphidic populations generally exhibited smaller body sizes and lower routine metabolic rates compared to non-sulphidic populations, which together caused significant reductions in total organismal energy demands in extremophile populations. Although both mechanisms contributed to variation in organismal energy demands, variance partitioning indicated reductions of body size overall had a greater effect than reductions of routine metabolism. Finally, population differences in routine metabolism documented in natural populations were maintained in common-garden reared individuals, indicating evolved differences. In combination with other studies, these results suggest that reductions in energy demands may represent a common theme in adaptation to physiochemical stressors. Selection for reduced energy demand may particularly affect body size, which has implications for life history evolution in extreme environments. PMID:29077740
Describing Chinese hospital activity with diagnosis related groups (DRGs). A case study in Chengdu.
Gong, Zhiping; Duckett, Stephen J; Legge, David G; Pei, Likun
2004-07-01
To examine the applicability of an Australian casemix classification system to the description of Chinese hospital activity. A total of 161,478 inpatient episodes from three Chengdu hospitals with demographic, diagnosis, procedure and billing data for the years 1998/1999, 1999/2000 and 2000/2001 were grouped using the Australian refined diagnosis related groups (AR-DRGs) (version 4.0) grouper. Outcome measures were reduction in variance (R2) and coefficient of variation (CV). Untrimmed reduction in variance (R2) was 0.12 and 0.17 for length of stay (LOS) and cost respectively. After trimming, R2 values were 0.45 and 0.59 for length of stay and cost respectively. The Australian refined DRGs provide a good basis for developing a Chinese grouper.
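Reduction in variance for a casemix grouping is simply one minus the ratio of within-group to total sum of squares. A small sketch on simulated length-of-stay data (not the study's data; the group counts and distributions are invented):

    import numpy as np
    import pandas as pd

    def reduction_in_variance(values, groups):
        # R2 = 1 - within-group SS / total SS for a casemix grouping
        df = pd.DataFrame({"y": values, "g": groups})
        total_ss = ((df["y"] - df["y"].mean()) ** 2).sum()
        within_ss = df.groupby("g")["y"].apply(
            lambda s: ((s - s.mean()) ** 2).sum()).sum()
        return 1.0 - within_ss / total_ss

    rng = np.random.default_rng(1)
    drg = rng.integers(0, 20, size=1000)              # hypothetical DRG labels
    los = rng.gamma(shape=2.0, scale=2.0 + 0.5 * drg)  # length of stay by group
    print(reduction_in_variance(los, drg))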
[Locked volar plating for complex distal radius fractures: maintaining radial length].
Jeudy, J; Pernin, J; Cronier, P; Talha, A; Massin, P
2007-09-01
Maintaining radial length, likely to be the main challenge in the treatment of complex distal radius fractures, is necessary for complete grip-strength and pro-supination range recovery. In spite of frequent secondary displacements, bridging external-fixation has remained the reference method, either isolated or in association with additional percutaneous pins or volar plating. Also, there seems to be a relation between algodystrophy and the duration of traction applied on the radio-carpal joint. Fixed-angle volar plating offers the advantage of maintaining the reduction until fracture healing, without bridging the joint. In a prospective study, forty-three consecutive fractures of the distal radius with a positive ulnar variance were treated with open reduction and fixed-angle volar plating. Results were assessed with special attention to the radial length and angulation obtained and maintained throughout treatment, based on repeated measurements of the ulnar variance and radial angulation in the first six months postoperatively. The correction of the ulnar variance was maintained until complete recovery, independently of initial metaphyseal comminution and of the amount of radial length gained at reduction. Only 3 patients lost more than 1 mm of radial length after reduction. The posterior tilt of the distal radial epiphysis was incompletely reduced in 13 cases, whereas reduction was partially lost in 6 elderly osteoporotic female patients. There were 8 articular malunions, all of them less than 2 mm. Secondary displacements were found to be related to a deficient locking technique. Eight patients developed algodystrophy. The risk factors for algodystrophy were articular malunion, associated posterior pinning, and associated lesions of the ipsilateral upper limb. Provided that the locking technique was correct, this type of fixation appeared efficient in maintaining the radial length in complex fractures of the distal radius. The main challenge remains the reduction of displaced articular fractures. Based on these results, it is not possible to conclude that this method is superior to external fixation.
NASA Astrophysics Data System (ADS)
Teng, Fei; Jin, Jing; Li, Yong; Zhang, Chunxi
2018-05-01
The contribution of modulator drive circuit noise as a 1/f noise source to the output noise of the high-sensitivity interferometric fiber optic gyroscope (IFOG) was studied here. A noise model of closed-loop IFOG was built. By applying the simulated 1/f noise sequence into the model, a gyroscope output data series was acquired, and the corresponding power spectrum density (PSD) and the Allan variance curve were calculated to analyze the noise characteristic. The PSD curve was in the spectral shape of 1/f, which verifies that the modulator drive circuit induced a low-frequency 1/f phase noise into the gyroscope. The random walk coefficient (RWC), a standard metric to characterize the noise performance of the IFOG, was calculated according to the Allan variance curve. Using an operational amplifier with an input 1/f noise of 520 nV/√Hz at 1 Hz, the RWC induced by this 1/f noise was 2 × 10⁻⁴ °/√h, which accounts for 63% of the total RWC. To verify the correctness of the noise model we proposed, a high-sensitivity gyroscope prototype was built and tested. The simulated Allan variance curve gave a good rendition of the prototype's actual measured curve. The error percentage between the simulated RWC and the measured value was less than 13%. According to the model, a noise reduction method is proposed and the effectiveness is verified by the experiment.
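The Allan variance computation behind the reported RWC can be sketched directly. This is a generic non-overlapping implementation applied to synthetic white noise, not the authors' model; the RWC conversion in the comment assumes a rate signal in °/h.

    import numpy as np

    def allan_deviation(rate, fs, taus):
        # non-overlapping Allan deviation of a sampled rate signal
        out = []
        for tau in taus:
            m = int(tau * fs)                 # samples per cluster
            n = len(rate) // m
            means = rate[: n * m].reshape(n, m).mean(axis=1)
            out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
        return np.array(out)

    # white noise example; if the rate is in deg/h, the angle random walk
    # coefficient in deg/sqrt(h) is sigma(tau = 1 s) / 60
    fs = 100.0
    rate = np.random.default_rng(2).normal(0.0, 0.5, size=1_000_000)
    sigma = allan_deviation(rate, fs, taus=[0.1, 1.0, 10.0])
    rwc = sigma[1] / 60.0
    print(sigma, rwc)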
Three Tests and Three Corrections: Comment on Koen and Yonelinas (2010)
ERIC Educational Resources Information Center
Jang, Yoonhee; Mickes, Laura; Wixted, John T.
2012-01-01
The slope of the z-transformed receiver-operating characteristic (zROC) in recognition memory experiments is usually less than 1, which has long been interpreted to mean that the variance of the target distribution is greater than the variance of the lure distribution. The greater variance of the target distribution could arise because the…
A validation study of public health knowledge, skills, social responsibility and applied learning.
Vackova, Dana; Chen, Coco K; Lui, Juliana N M; Johnston, Janice M
2018-06-22
To design and validate a questionnaire to measure medical students' Public Health (PH) knowledge, skills, social responsibility and applied learning as indicated in the four domains recommended by the Association of Schools & Programmes of Public Health (ASPPH). A cross-sectional study was conducted to develop an evaluation tool for PH undergraduate education through item generation, reduction, refinement and validation. The 74 preliminary items derived from the existing literature were reduced to 55 items based on expert panel review which included those with expertise in PH, psychometrics and medical education, as well as medical students. Psychometric properties of the preliminary questionnaire were assessed as follows: frequency of endorsement for item variance; principal component analysis (PCA) with varimax rotation for item reduction and factor estimation; Cronbach's Alpha, item-total correlation and test-retest validity for internal consistency and reliability. PCA yielded five factors: PH Learning Experience (6 items); PH Risk Assessment and Communication (5 items); Future Use of Evidence in Practice (6 items); Recognition of PH as a Scientific Discipline (4 items); and PH Skills Development (3 items), explaining 72.05% variance. Internal consistency and reliability tests were satisfactory (Cronbach's Alpha ranged from 0.87 to 0.90; item-total correlation > 0.59). Lower paired test-retest correlations reflected instability in a social science environment. An evaluation tool for community-centred PH education has been developed and validated. The tool measures PH knowledge, skills, social responsibilities and applied learning as recommended by the internationally recognised Association of Schools & Programmes of Public Health (ASPPH).
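Cronbach's alpha, used here for internal consistency, is easy to compute from an item-score matrix. A minimal sketch on simulated responses (hypothetical data, not the questionnaire items):

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, k_items) array of scale scores
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(200, 1))
    responses = latent + rng.normal(scale=0.8, size=(200, 6))  # 6-item factor
    print(cronbach_alpha(responses))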
Schiebener, Johannes; Brand, Matthias
2017-06-01
Previous literature has explained older individuals' disadvantageous decision-making under ambiguity in the Iowa Gambling Task (IGT) by reduced emotional warning signals preceding decisions. We argue that age-related reductions in IGT performance may also be explained by reductions in certain cognitive abilities (reasoning, executive functions). In 210 participants (18-86 years), we found that the age-related variance on IGT performance occurred only in the last 60 trials. The effect was mediated by cognitive abilities and their relation with decision-making performance under risk with explicit rules (Game of Dice Task). Thus, reductions in cognitive functions in older age may be associated with both a reduced ability to gain explicit insight into the rules of the ambiguous decision situation and with failure to choose the less risky options consequently after the rules have been understood explicitly. Previous literature may have underestimated the relevance of cognitive functions for age-related decline in decision-making performance under ambiguity.
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.
2016-02-01
Quality improvement must be performed in a company to maintain its product competitiveness in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components in the assembly process of a final product. Hence quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate the variance reduction is developed. Variance reduction is an important step in quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest part of its financial resources in the suppliers' learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and the internal and external quality costs. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.
Average properties of bidisperse bubbly flows
NASA Astrophysics Data System (ADS)
Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.
2018-03-01
Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes with two distinct inner diameters; the flow through each capillary size was controlled such that the amount of large or small bubbles could be controlled. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. The measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with the monodisperse flows with equivalent gas volume fractions. We found that the bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small size species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bound by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation of bidisperse flows is proposed, with good agreement with the experimental measurements.
2008-09-15
however, a variety of so-called variance-reduction techniques (VRTs) have been developed, which reduce output variance with little or no... additional computational effort. VRTs typically achieve this via judicious and careful reuse of the basic underlying random numbers. Perhaps the best-known... typical simulation situation: change a weapons-system configuration and see what difference it makes). Key to making CRN and most other VRTs work
Carlson, Eve B.; Palmieri, Patrick A.; Field, Nigel P.; Dalenberg, Constance J.; Macia, Kathryn S.; Spain, David A.
2016-01-01
Objective Traumatic experiences cause considerable suffering and place a burden on society due to lost productivity, increases in suicidality, violence, criminal behavior, and psychological disorder. The impact of traumatic experiences is complicated because many factors affect individuals’ responses. By employing several methodological improvements, we sought to identify risk factors that would account for a greater proportion of variance in later disorder than prior studies. Method In a sample of 129 traumatically injured hospital patients and family members of injured patients, we studied pre-trauma, time of trauma, and post-trauma psychosocial risk and protective factors hypothesized to influence responses to traumatic experiences and posttraumatic (PT) symptoms (including symptoms of PTSD, depression, negative thinking, and dissociation) two months after trauma. Results The risk factors were all significantly correlated with later PT symptoms, with post-trauma life stress, post-trauma social support, and acute stress symptoms showing the strongest relationships. A hierarchical regression, in which the risk factors were entered in 6 steps based on their occurrence in time, showed the risks accounted for 72% of the variance in later symptoms. Most of the variance in PT symptoms was shared among many risk factors, and pre-trauma and post-trauma risk factors accounted for the most variance. Conclusions Collectively, the risk factors accounted for more variance in later PT symptoms than in previous studies. These risk factors may identify individuals at risk for PT psychological disorders and targets for treatment. PMID:27423351
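The hierarchical regression strategy used here, entering blocks of risk factors in temporal order and tracking the change in explained variance, can be sketched as follows. All variable names and effect sizes are invented for the example:

    import numpy as np

    def r_squared(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(4)
    n = 129
    pre = rng.normal(size=(n, 2))    # pre-trauma factors (hypothetical)
    peri = rng.normal(size=(n, 1))   # time-of-trauma factors
    post = rng.normal(size=(n, 2))   # post-trauma factors
    y = pre @ [0.5, 0.3] + 0.4 * peri[:, 0] + post @ [0.6, 0.5] \
        + rng.normal(size=n)

    X, r2_prev = np.empty((n, 0)), 0.0
    for name, block in zip(["pre", "peri", "post"], [pre, peri, post]):
        X = np.column_stack([X, block])
        r2 = r_squared(X, y)
        print(f"{name}: delta R^2 = {r2 - r2_prev:.3f}")  # incremental variance
        r2_prev = r2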
Patch dynamics of a foraging assemblage of bees.
Wright, David Hamilton
1985-03-01
The composition and dynamics of foraging assemblages of bees were examined from the standpoint of species-level arrival and departure processes in patches of flowers. Experiments with bees visiting 4 different species of flowers in subalpine meadows in Colorado gave the following results: 1) In enriched patches the rates of departure of bees were reduced, resulting in increases in both the number of bees per species and the average number of species present. 2) The reduction in bee departure rates from enriched patches was due to mechanical factors (increased flower handling time) and to behavioral factors (an increase in the number of flowers visited per inflorescence and in the number of inflorescences visited per patch). Bees foraging in enriched patches could collect nectar 30-45% faster than those foraging in control patches. 3) The quantitative changes in foraging assemblages due to enrichment, in terms of means and variances of species population sizes, fraction of time a species was present in a patch, and in mean and variance of the number of species present, were in reasonable agreement with predictions drawn from queuing theory and studies in island biogeography. 4) Experiments performed with 2 species of flowers with different corolla tube lengths demonstrated that manipulation of resources of differing availability had unequal effects on particular subsets of the larger foraging community. The arrival-departure process of bees on flowers and the immigration-extinction process of species on islands are contrasted, and the value of the stochastic, species-level approach to community composition is briefly discussed.
Bet hedging based cooperation can limit kin selection and form a basis for mutualism.
Uitdehaag, Joost C M
2011-07-07
Mutualism is a mechanism of cooperation in which partners that differ help each other. As such, mutualism opposes mechanisms of kin selection and tag-based selection (for example the green beard mechanism), which are based on giving exclusive help to partners that are related or carry the same tag. In contrast to kin selection, which is a basis for parochialism and intergroup warfare, mutualism can therefore be regarded as a mechanism that drives peaceful coexistence between different groups and individuals. Here the competition between mutualism and kin (tag) selection is studied. In a model where kin selection and tag-based selection are dominant, mutualism is promoted by introducing environmental fluctuations. These fluctuations cause reduction in reproductive success by the mechanism of variance discount. The best strategy to counter variance discount is to share with agents who experience the most anticorrelated fluctuations, a strategy called bet hedging. In this way, bet hedging stimulates cooperation with the most unrelated partners, which is a basis for mutualism. Analytic results and simulations reveal that, if this effect is large enough, mutualistic strategies can dominate kin selective strategies. In addition, mutants of these mutualistic strategies that experience fluctuations that are more anticorrelated to their partner, can outcompete wild type, which can lead to the evolution of specialization. In this way, the evolutionary success of mutualistic strategies can be explained by bet hedging-based cooperation. Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can with the same knowledge of data get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
Harvey, Philip D; Khan, Anzalee; Keefe, Richard S E
2017-12-01
Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as "avolition and anhedonia," specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
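For correlated estimates of a single eigenvalue, the maximum likelihood combination under a multivariate normal model is the generalized least squares (inverse-covariance weighted) mean. A small sketch with hypothetical numbers, not the SAM-CE or VIM results:

    import numpy as np

    def combine_correlated_estimates(x, cov):
        # minimum-variance (ML under multivariate normal) combination
        # of correlated estimates of a common quantity
        x = np.asarray(x, dtype=float)
        ones = np.ones_like(x)
        w = np.linalg.solve(cov, ones)
        w /= ones @ w                    # weights sum to 1
        var = 1.0 / (ones @ np.linalg.solve(cov, ones))
        return w @ x, var

    x = np.array([1.002, 0.998, 1.004])      # hypothetical eigenvalue estimates
    cov = np.array([[4.0, 1.5, 1.0],
                    [1.5, 3.0, 0.8],
                    [1.0, 0.8, 5.0]]) * 1e-6  # hypothetical covariance
    print(combine_correlated_estimates(x, cov))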
Denoising of polychromatic CT images based on their own noise properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
Purpose: Because of high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since the CT scan introduces radiation exposure to patients, however, dose reduction has recently been recognized as an important issue in CT imaging. However, low-dose CT causes an increase of noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space variant noise statistics, which are uniquely determined according to the system geometry and scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. We then estimate the space-variant noise variance distribution on the image domain. Since there are many high performance denoising algorithms available for the Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance can have a linear relationship, and then produce a Gaussian image via variance stabilizing transform. The authors then apply a block matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain a final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image will have homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm can deliver considerably better performance compared to the existing denoising algorithms.
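The idea of converting signal-dependent noise to near-Gaussian noise via a variance stabilizing transform can be illustrated with the classical Anscombe transform for Poisson-like noise. This is a stand-in for the authors' CT-specific conversion, with a plain Gaussian filter in place of the BM4D denoiser; the image and noise model are synthetic:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def anscombe(x):
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        # simple algebraic inverse; unbiased inverses exist but are longer
        return (y / 2.0) ** 2 - 3.0 / 8.0

    rng = np.random.default_rng(5)
    clean = 50.0 + 30.0 * rng.random((64, 64))
    noisy = rng.poisson(clean).astype(float)     # signal-dependent noise

    stabilized = anscombe(noisy)                 # variance ~ 1 everywhere
    denoised = gaussian_filter(stabilized, sigma=1.0)  # any Gaussian denoiser
    restored = inverse_anscombe(denoised)
    print(np.abs(restored - clean).mean(), np.abs(noisy - clean).mean())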
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remains unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
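Parameter recovery from an ex-Gaussian reaction-time distribution, with added technical noise, can be sketched with SciPy's exponnorm family (shape K = tau/sigma). The noise model below is a crude stand-in, not the simulations from the paper:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # simulated RTs: Gaussian (mu=400, sigma=40) plus exponential (tau=100)
    rt = rng.normal(400.0, 40.0, size=2000) + rng.exponential(100.0, size=2000)
    rt_noisy = rt + rng.uniform(0.0, 17.0, size=rt.size)  # crude timer noise

    # SciPy parameterizes the ex-Gaussian as exponnorm with K = tau / sigma
    K, loc, scale = stats.exponnorm.fit(rt_noisy)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu={mu:.1f}, sigma={sigma:.1f}, tau={tau:.1f}")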
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.
Uechi, Ken; Asakura, Keiko; Masayasu, Shizuko; Sasaki, Satoshi
2017-06-01
Salt intake in Japan remains high; therefore, exploring within-country variation in salt intake and its cause is an important step in the establishment of salt reduction strategies. However, no nationwide evaluation of this variation has been conducted by urinalysis. We aimed to clarify whether within-country variation in salt intake exists in Japan after adjusting for individual characteristics. Healthy men (n=1027) and women (n=1046) aged 20-69 years were recruited from all 47 prefectures of Japan. Twenty-four-hour sodium excretion was estimated using three spot urine samples collected on three nonconsecutive days. The study area was categorized into 12 regions defined by the National Health and Nutrition Survey Japan. Within-country variation in sodium excretion was estimated as a population (region)-level variance using a multilevel model with random intercepts, with adjustment for individual biological, socioeconomic and dietary characteristics. Estimated 24 h sodium excretion was 204.8 mmol per day in men and 155.7 mmol per day in women. Sodium excretion was high in the Northeastern region. However, population-level variance was extremely small after adjusting for individual characteristics (0.8 and 2% of overall variance in men and women, respectively) compared with individual-level variance (99.2 and 98% of overall variance in men and women, respectively). Among individual characteristics, greater body mass index, living with a spouse and high miso-soup intake were associated with high sodium excretion in both sexes. Within-country variation in salt intake in Japan was extremely small compared with individual-level variation. Salt reduction strategies for Japan should be comprehensive and should not address the small within-country differences in intake.
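The variance partitioning reported here comes from a random-intercept multilevel model. A sketch with statsmodels on simulated data; all column names, effect sizes, and the number of regions are hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n, regions = 1000, 12
    df = pd.DataFrame({
        "region": rng.integers(0, regions, size=n),
        "bmi": rng.normal(23.0, 3.0, size=n),
    })
    region_effect = rng.normal(0.0, 2.0, size=regions)  # small between-region SD
    df["sodium"] = (180.0 + 4.0 * (df["bmi"] - 23.0)
                    + region_effect[df["region"]]
                    + rng.normal(0.0, 40.0, size=n))

    m = smf.mixedlm("sodium ~ bmi", df, groups=df["region"]).fit()
    between = m.cov_re.iloc[0, 0]   # region-level (random intercept) variance
    within = m.scale                # individual-level residual variance
    print(f"region share of variance: {between / (between + within):.1%}")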
Chapinal, N; de Passillé, A M; Pastell, M; Hänninen, L; Munksgaard, L; Rushen, J
2011-06-01
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector and the variance of overall acceleration, as well as the asymmetry of variance of acceleration within the front and rear pair of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r=-0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED)=0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED=2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED=0.02). The acceleration of the front (1.67 vs. 1.72 g; SED=0.02) and rear (1.62 vs. 1.67 g; SED=0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED=0.03) were lower when cows walked on rubber compared with concrete. Despite the improvements in gait score that occurred when cows walked on rubber, the asymmetry of variance of acceleration of the front leg was higher (15.2 vs. 10.4%; SED=2.0). The difference in walking speed between concrete and rubber correlated with the difference in the mean acceleration and the difference in the variance of acceleration of the legs and back (r ≥ 0.6). Three-dimensional accelerometers seem to be a promising tool for lameness detection on farm and to study walking surfaces, especially when attached to a leg. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Shankarapillai, Rajesh; Nair, Manju Anathakrishnan; George, Roy
2012-01-01
Context: The dental students experience a lot of stress, which increase when they perform their first surgical procedure. Yoga as an anxiolytic tool in anxiety reduction has been practiced over centuries in India. Aim: To assess the efficacy of yoga in reducing the state trait anxiety of dental students before their first periodontal surgery performance. Settings and Design: A randomized controlled study using a two-way split plot design (pre-post-test) was conducted in the department of periodontics, Pacific Dental College, Udaipur, India. Materials and Methods: One hundred clinical dental students who were ready to perform their first periodontal surgery were selected. Students were randomly assigned to two groups and were given a 60-min session on stress reduction. Group A, yogic intervention group, were instructed to do yoga and their performances were monitored for a period of one week and Group B, control group, were given a lecture on stress reduction without any yoga instructions. The investigator who was unaware of the groups had taken the state trait anxiety score of the students three times a) before assigning them to each group, b) prior to the surgical procedure and c) immediately after the performance of surgery. Statistical Analysis Used: Analyses of variance (ANOVA) by SPSS V.16. Results: The statistical results showed a significant reduction in the VAS and state trait anxiety of Group A compared to Group B (ANOVA; P<0.001). Conclusions: This study concludes that Yogic breathing has a significant effect on the reduction of state trait anxiety level of dental students. PMID:22346066
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
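The essence of such weighting is that observations from noisier patients or arrays contribute less. A minimal inverse-variance weighting sketch, not the paper's empirical Bayes machinery; the numbers are invented:

    import numpy as np

    def weighted_mean_difference(diffs, variances):
        # inverse-variance weighted mean of per-patient paired differences;
        # noisier patients receive less weight
        v = np.asarray(variances, dtype=float)
        w = (1.0 / v) / (1.0 / v).sum()
        se = np.sqrt(1.0 / (1.0 / v).sum())
        return w @ np.asarray(diffs, dtype=float), se

    diffs = np.array([0.8, 1.1, 0.5, 1.4, 0.9])           # hypothetical log-ratios
    variances = np.array([0.05, 0.30, 0.08, 0.50, 0.10])  # per-patient quality
    print(weighted_mean_difference(diffs, variances))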
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, important sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
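Subset simulation expresses the rare-event probability as a product of more frequent conditional probabilities estimated level by level with MCMC. The toy sketch below uses a simplified full-vector Metropolis step rather than the component-wise modified Metropolis-Hastings algorithm, and a linear limit-state function with a known reference answer; it is an illustration of the technique, not the paper's biochemical application:

    import numpy as np

    rng = np.random.default_rng(8)

    def g(x):
        # limit-state function; the rare event is g(x) > b with x ~ N(0, I)
        return x.sum()

    def subset_simulation(g, dim, b, n=2000, p0=0.1, max_levels=10):
        x = rng.normal(size=(n, dim))
        y = np.apply_along_axis(g, 1, x)
        p = 1.0
        for _ in range(max_levels):
            thresh = np.quantile(y, 1.0 - p0)
            if thresh >= b:
                return p * np.mean(y > b)
            p *= p0
            keep = y > thresh
            seeds_x, seeds_y = x[keep], y[keep]
            steps = int(np.ceil(n / len(seeds_x)))
            xs, ys = [], []
            for cx, cy in zip(seeds_x, seeds_y):
                for _ in range(steps):
                    cand = cx + rng.normal(scale=1.0, size=dim)
                    # Metropolis accept against the N(0, I) density, then
                    # condition on the current intermediate event g > thresh
                    if (rng.random() < np.exp(0.5 * (cx @ cx - cand @ cand))
                            and g(cand) > thresh):
                        cx, cy = cand, g(cand)
                    xs.append(cx)
                    ys.append(cy)
            x, y = np.array(xs)[:n], np.array(ys)[:n]
        return p * np.mean(y > b)

    # reference: P(sum of 10 iid N(0,1) > 10) = 1 - Phi(10/sqrt(10)) ~ 7.8e-4
    print(subset_simulation(g, dim=10, b=10.0))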
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is significant for saving energy and reducing environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between power consumption and machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is assessed using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
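A response surface model of power consumption is a second-order polynomial fitted to designed experiments, which can then be minimized over the parameter bounds. A sketch with invented turning data (cutting speed v, feed f, depth of cut d), not the AISI 6061 measurements:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    n = 27
    df = pd.DataFrame({"v": rng.uniform(60, 180, n),     # m/min
                       "f": rng.uniform(0.05, 0.25, n),  # mm/rev
                       "d": rng.uniform(0.5, 2.0, n)})   # mm
    # hypothetical power response in kW
    df["P"] = (0.5 + 0.004 * df.v + 6.0 * df.f + 0.5 * df.d
               + 0.00001 * df.v ** 2 + rng.normal(0, 0.05, n))

    # second-order response surface model with interactions
    m = smf.ols("P ~ v + f + d + I(v**2) + I(f**2) + I(d**2) + v:f + v:d + f:d",
                data=df).fit()

    def predicted_power(x):
        v, f, d = x
        return m.predict(pd.DataFrame({"v": [v], "f": [f], "d": [d]})).iloc[0]

    res = minimize(predicted_power, x0=[120, 0.15, 1.2],
                   bounds=[(60, 180), (0.05, 0.25), (0.5, 2.0)])
    print(res.x, res.fun)   # minimizing parameters and predicted power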
Simulation of neutron production using MCNPX+MCUNED.
Erhard, M; Sauvan, P; Nolte, R
2014-10-01
In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 makes it possible to model the production of neutrons by light ions down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB. In this experiment, 14.7-MeV neutrons will be produced by the reaction T(d,n)⁴He. The neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Neutron die-away experiment for remote analysis of the surface of the moon and the planets, phase 3
NASA Technical Reports Server (NTRS)
Mills, W. R.; Allen, L. S.
1972-01-01
Continuing work on the two die-away measurements proposed to be made in the combined pulsed neutron experiment (CPNE) for analysis of lunar and planetary surfaces is described. This report documents research done during Phase 3. A general exposition of data analysis by the least-squares method and the related problem of the prediction of variance is given. A data analysis procedure for epithermal die-away data has been formulated. In order to facilitate the analysis, the number of independent material variables has been reduced to two: the hydrogen density and an effective oxygen density, the latter being determined uniquely from the nonhydrogeneous elemental composition. Justification for this reduction in the number of variables is based on a set of 27 new theoretical calculations. Work is described related to experimental calibration of the epithermal die-away measurement. An interim data analysis technique based solely on theoretical calculations seems to be adequate and will be used for future CPNE field tests.
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf
2012-01-01
Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.
Key factors in children's competence to consent to clinical research.
Hein, Irma M; Troost, Pieter W; Lindeboom, Robert; Benninga, Marc A; Zwaan, C Michel; van Goudoever, Johannes B; Lindauer, Ramón J L
2015-10-24
Although law is established on a strong presumption that persons younger than a certain age are not competent to consent, statutory age limits for asking children's consent to clinical research differ widely internationally. From a clinical perspective, competence is assumed to involve many factors including the developmental stage, the influence of parents and peers, and life experience. We examined potential determining factors for children's competence to consent to clinical research and to what extent they explain the variation in competence judgments. From January 1, 2012 through January 1, 2014, pediatric patients aged 6 to 18 years, eligible for clinical research studies, were enrolled prospectively at various in- and outpatient pediatric departments. Children's competence to consent was assessed by the MacArthur Competence Assessment Tool for Clinical Research. Potential determining child variables included age, gender, intelligence, disease experience, ethnicity and socio-economic status (SES). We used logistic regression analysis and change in explained variance in competence judgments to quantify the contribution of a child variable to the total explained variance. Contextual factors included risk and complexity of the decision to participate, parental competence judgment and the child's or parents' decision to participate. Out of 209 eligible patients, 161 were included (mean age, 10.6 years, 47.2 % male). Age, SES, intelligence, ethnicity, complexity, parental competence judgment and trial participation were univariately associated with competence (P < 0.05). Total explained variance in competence judgments was 71.5 %. Only age and intelligence significantly and independently explained the variance in competence judgments, explaining 56.6 % and 12.7 % of the total variance respectively. SES, male gender, disease experience and ethnicity each explained less than 1 % of the variance in competence judgments. Contextual factors together explained an extra 2.8 % (P > 0.05). Age is the factor that explains most of the variance in children's competence to consent, followed by intelligence. Experience with disease did not affect competence in this study, nor did other variables. Development and use of a standardized instrument for assessing children's competence to consent in drug trials: Are legally established age limits valid?, NTR3918.
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
Summary: We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
Jackknife for Variance Analysis of Multifactor Experiments.
1982-05-01
variance-covariance matrix is generated y a subroutine named CORAN (UNIVAC, 1969). The jackknife variances are then punched on computer cards in the same...LEVEL OF: InMte CALL cORAN (oaILa.NSUR.NOAY.D,*OXflRRORR.PCOF.2K.1’)I WRITE IP97111 )1RRN.4 .1:NDAY) 0 a 3fill1UR I .’t UN 001f’..1uŔ:1 .w100710n
Henriques-Calado, Joana; Duarte-Silva, Maria Eugénia; Campos, Rui C; Sacoto, Carlota; Keong, Ana Marta; Junqueira, Diana
2013-01-01
As part of the research relating personality and depression, this study seeks to predict depressive experiences in aging women according to Sidney Blatt's perspective based on the Five-Factor Model of Personality. The NEO-Five Factor Inventory and the Depressive Experiences Questionnaire were administered. The domains Neuroticism, Agreeableness, and Conscientiousness predicted self-criticism, explaining 68% of the variance; the domains Neuroticism and Extraversion predicted dependency, explaining 62% of the variance. The subfactors Neediness and Connectedness were differently related to personality traits. These findings are relevant to the research relating personality and anaclitic / introjective depressive experiences in late adulthood.
Massah, Omid; Sohrabi, Faramarz; A’azami, Yousef; Doostian, Younes; Farhoudian, Ali; Daneshmand, Reza
2016-01-01
Background Emotion plays an important role in adapting to life changes and stressful events. Difficulty regulating emotions is one of the problems drug abusers often face, and teaching these individuals to express and manage their emotions can be effective in improving their difficult circumstances. Objectives The present study aimed to determine the effectiveness of Gross model-based emotion regulation strategies training on anger reduction in drug-dependent individuals. Patients and Methods The present study had a quasi-experimental design wherein pretest-posttest evaluations were applied using a control group. The population under study included addicts attending Marivan’s methadone maintenance therapy centers in 2012 - 2013. Convenience sampling was used to select 30 substance-dependent individuals undergoing maintenance treatment who were then randomly assigned to the experimental and control groups. The experimental group received its training in eight two-hour sessions. Data were analyzed using analysis of co-variance and paired t-test. Results There was a significant reduction in anger symptoms of drug-dependent individuals after Gross model-based emotion regulation training (ERT) (P < 0.001). Moreover, the effectiveness of the training on anger persisted in the follow-up period. Conclusions Symptoms of anger in drug-dependent individuals in this study were reduced by Gross model-based emotion regulation strategies training. Based on the results of this study, we may conclude that Gross model-based emotion regulation strategies training can be applied alongside other therapies to treat drug abusers undergoing rehabilitation. PMID:27162759
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
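A one-way fixed-effects ANOVA of the kind the tutorial introduces reduces to comparing between-group and within-group variance. A minimal sketch with SciPy, using synthetic data standing in for replicate wind-tunnel runs (all names and values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Three hypothetical fixed treatment levels, 8 replicate runs each;
# values loosely mimic a drag-coefficient-like response.
g1 = rng.normal(0.30, 0.02, 8)
g2 = rng.normal(0.32, 0.02, 8)
g3 = rng.normal(0.35, 0.02, 8)

# F is the ratio of between-group to within-group mean squares.
f, p = stats.f_oneway(g1, g2, g3)
print(f"F = {f:.2f}, p = {p:.4f}")
```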
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
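The resampling idea can be sketched as follows. The exact parameterization used in the paper is not reproduced here, so the degrees-of-freedom and scale choices below are assumptions in the spirit of a conjugate normal model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical initial data: 25 observations of two correlated inputs.
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 2.0]], size=25)
n, p = data.shape
xbar = data.mean(axis=0)
S = np.cov(data, rowvar=False)

# One realization of the population covariance (Wishart, scaled so its
# mean is S) and of the population mean (multivariate t about xbar).
cov_draw = stats.wishart(df=n - 1, scale=S / (n - 1)).rvs(random_state=rng)
mean_draw = stats.multivariate_t(loc=xbar, shape=S / n, df=n - p).rvs(random_state=rng)
print(mean_draw)
print(cov_draw)
```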
Guzick, Andrew G.; McNamara, Joseph P.H.; Reid, Adam M.; Balkhi, Amanda M.; Storch, Eric A.; Murphy, Tanya K.; Goodman, Wayne K.; Bussing, Regina; Geffken, Gary R.
2017-01-01
Attention-deficit/hyperactivity disorder (ADHD) has been found to be highly comorbid in children and adolescents with obsessive-compulsive disorder (OCD). Some have proposed, however, that obsessive anxiety may cause inattention and executive dysfunction, leading to inappropriate ADHD diagnoses in those with OCD. If this were the case, these symptoms would be expected to decrease following successful OCD treatment. The present study tested this hypothesis and evaluated whether ADHD symptoms at baseline predicted OCD treatment response. Obsessive-compulsive and ADHD symptoms were assessed in 50 youth enrolled in a randomized controlled trial investigating selective serotonin reuptake inhibitor and cognitive behavioral treatment. Repeated-measures analysis of variance (RMANOVA) revealed that ADHD symptoms at baseline do not significantly predict treatment outcome. A multivariate RMANOVA found that OCD treatment response moderated change in inattention; participants who showed greater reduction in OCD severity experienced greater reduction in ADHD-inattentive symptoms, while those with less substantial reduction in obsessions and compulsions showed less change. These findings suggest that children and adolescents with OCD and inattention may experience meaningful improvements in attention problems following OCD treatment. Thus, in many youth with OCD, inattention may be inherently tied to obsessions and compulsions. Clinicians may consider addressing OCD in treatment before targeting inattentive-type ADHD. PMID:28966908
Sheng, Zheya; Pettersson, Mats E; Honaker, Christa F; Siegel, Paul B; Carlborg, Örjan
2015-10-01
Artificial selection provides a powerful approach to study the genetics of adaptation. Using selective-sweep mapping, it is possible to identify genomic regions where allele-frequencies have diverged during selection. To avoid false positive signatures of selection, it is necessary to show that a sweep affects a selected trait before it can be considered adaptive. Here, we confirm candidate, genome-wide distributed selective sweeps originating from the standing genetic variation in a long-term selection experiment on high and low body weight of chickens. Using an intercross between the two divergent chicken lines, 16 adaptive selective sweeps were confirmed based on their association with the body weight at 56 days of age. Although individual additive effects were small, the fixation for alternative alleles across the loci contributed at least 40 % of the phenotypic difference for the selected trait between these lines. The sweeps contributed about half of the additive genetic variance present within and between the lines after 40 generations of selection, corresponding to a considerable portion of the additive genetic variance of the base population. Long-term, single-trait, bi-directional selection in the Virginia chicken lines has resulted in a gradual response to selection for extreme phenotypes without a drastic reduction in the genetic variation. We find that fixation of several standing genetic variants across a highly polygenic genetic architecture made a considerable contribution to long-term selection response. This provides new fundamental insights into the dynamics of standing genetic variation during long-term selection and adaptation.
Wood, Jacquelyn L A; Yates, Matthew C; Fraser, Dylan J
2016-06-01
It is widely thought that small populations should have less additive genetic variance and respond less efficiently to natural selection than large populations. Across taxa, we meta-analytically quantified the relationship between adult census population size (N) and additive genetic variance (proxy: h²) and found no reduction in h² with decreasing N; surveyed populations ranged from four to one million individuals (1735 h² estimates, 146 populations, 83 species). In terms of adaptation, ecological conditions may systematically differ between populations of varying N; the magnitude of selection these populations experience may therefore also differ. We thus also meta-analytically tested whether selection changes with N and found little evidence for systematic differences in the strength, direction or form of selection with N across different trait types and taxa (7344 selection estimates, 172 populations, 80 species). Collectively, our results (i) indirectly suggest that genetic drift neither overwhelms selection more in small than in large natural populations, nor weakens adaptive potential/h² in small populations, and (ii) imply that natural populations of varying sizes experience a variety of environmental conditions, without consistently differing habitat quality at small N. However, we caution that the data are currently insufficient to determine definitively whether some small populations retain adaptive potential. Further study is required into (i) selection and genetic variation in completely isolated populations of known N, under-represented taxonomic groups, and nongeneralist species, (ii) adaptive potential using multidimensional approaches and (iii) the nature of selective pressures for specific traits.
An Evolutionary Perspective on Epistasis and the Missing Heritability
Hemani, Gibran; Knott, Sara; Haley, Chris
2013-01-01
The relative importance between additive and non-additive genetic variance has been widely argued in quantitative genetics. By approaching this question from an evolutionary perspective we show that, while additive variance can be maintained under selection at a low level for some patterns of epistasis, the majority of the genetic variance that will persist is actually non-additive. We propose that one reason that the problem of the “missing heritability” arises is because the additive genetic variation that is estimated to be contributing to the variance of a trait will most likely be an artefact of the non-additive variance that can be maintained over evolutionary time. In addition, it can be shown that even a small reduction in linkage disequilibrium between causal variants and observed SNPs rapidly erodes estimates of epistatic variance, leading to an inflation in the perceived importance of additive effects. We demonstrate that the perception of independent additive effects comprising the majority of the genetic architecture of complex traits is biased upwards and that the search for causal variants in complex traits under selection is potentially underpowered by parameterising for additive effects alone. Given dense SNP panels the detection of causal variants through genome-wide association studies may be improved by searching for epistatic effects explicitly. PMID:23509438
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
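As background, the exponential transform biases path-length sampling toward the tally region and compensates with statistical weights. A toy sketch on a one-dimensional slab-transmission problem (not the spatially continuous problem analyzed in the paper) shows the variance reduction:

```python
import numpy as np

rng = np.random.default_rng(4)
tau, n = 10.0, 100_000              # optical depth and number of histories
exact = np.exp(-tau)                # uncollided transmission probability

# Analog sampling: free path ~ Exp(1); score 1 if the particle crosses.
s = rng.exponential(1.0, n)
analog = (s > tau).astype(float)

# Exponential transform: stretch free paths (rate lam < 1) and carry a
# weight equal to the ratio of the true to the sampled path-length density.
lam = 1.0 / tau                     # a convenient stretching choice
s2 = rng.exponential(1.0 / lam, n)
w = np.exp(-s2) / (lam * np.exp(-lam * s2))
transformed = w * (s2 > tau)

for name, score in (("analog", analog), ("transformed", transformed)):
    rel_err = score.std(ddof=1) / np.sqrt(n) / exact
    print(f"{name:11s}: mean {score.mean():.2e}  relative error {rel_err:.3f}")
```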
Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, D.
A new variance reduction technique (VRT) is implemented in the BEAM code [1] to specifically improve the efficiency of calculating penumbral distributions of in-air fluence profiles calculated for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT includes splitting of photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributing the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis, in addition to the energy of the photon. It is shown that the use of the VRT developed in this work can lead to a 6-9 fold improvement in the efficiency of the penumbral photon fluence of a 60Co beam compared to that calculated using the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
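The symmetry argument can be sketched compactly: each photon is replaced by k copies of weight w/k at random azimuths on the circle of the same radius, which leaves the expected (azimuthally symmetric) fluence unchanged while smoothing the penumbral statistics. A hypothetical sketch of that step, not the BEAM implementation:

```python
import numpy as np

def split_on_circle(x, y, uz, energy, weight, k, rng):
    """Replace one photon by k copies at uniformly random azimuths on the
    circle through (x, y), preserving the axial direction cosine uz and the
    energy; weight is divided by k so the expected fluence is unchanged."""
    r = np.hypot(x, y)
    phi = rng.uniform(0.0, 2.0 * np.pi, k)
    return [(r * np.cos(p), r * np.sin(p), uz, energy, weight / k) for p in phi]

rng = np.random.default_rng(5)
copies = split_on_circle(1.2, -0.5, 0.97, 1.25, 1.0, k=8, rng=rng)
print(copies[0])
```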
Afrisham, Reza; Sadegh-Nejadi, Sahar; SoliemaniFar, Omid; Kooti, Wesam; Ashtary-Larky, Damoon; Alamiri, Fatima; Najjar-Asl, Sedigheh; Khaneh-Keshi, Ali
2016-01-01
Objective The purpose of this study was to evaluate salivary testosterone levels under psychological stress and their relationship with rumination and five personality traits in medical students. Methods A total of 58 medical students, who wanted to participate in the final exam, were selected by simple random sampling. Two months before the exam, under basal conditions, the NEO Inventory short form and the Emotional Control Questionnaire (ECQ) were completed. Saliva samples were taken from students both under basal conditions and under exam stress. Salivary testosterone was measured by ELISA. Data were analyzed using multivariate analysis of variance with repeated measures, paired samples t-test, Pearson correlation and stepwise regression analysis. Results The salivary testosterone level of men showed a significant increase under exam stress (p<0.05), whereas a non-significant although substantial reduction was observed in women. A significant correlation was found between extroversion (r=-0.33) and openness to experience (r=0.30) with salivary testosterone (p<0.05). Extraversion, aggression control and emotional inhibition predicted 28% of the variance of salivary testosterone under stress. Conclusion Salivary testosterone reactivity to stress can be determined by sex differences, personality traits, and emotional control variables, which may decrease or increase stress effects on biological responses, especially on salivary testosterone. PMID:27909455
Variation of gene expression in Bacillus subtilis samples of fermentation replicates.
Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce
2011-06-01
The application of comprehensive gene expression profiling technologies to compare wild and mutated microorganism samples or to assess molecular differences between various treatments has been widely used. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels of five-repeated flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that above 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.
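The CV screen described above is straightforward to reproduce in outline; the sketch below uses synthetic intensities in place of the real array data, with the same 10% threshold:

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic stand-in for the intensities: 4106 genes x 12 replicates.
expr = rng.lognormal(mean=5.0, sigma=0.05, size=(4106, 12))

cv = expr.std(axis=1, ddof=1) / expr.mean(axis=1)  # per-gene CV
print(f"{(cv > 0.10).sum()} of {len(cv)} genes exceed a 10% CV")
```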
Increasing Deception Detection Accuracy with Strategic Questioning
ERIC Educational Resources Information Center
Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.
2010-01-01
One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
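Under a normal model with common variance, a simple plug-in version of this estimator is π̂ = Φ(d), where d is the standardized mean difference. The sketch below is that plug-in form, not the exact-distribution or minimum variance unbiased estimators derived in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treat = rng.normal(0.5, 1.0, 40)
ctrl = rng.normal(0.0, 1.0, 40)

# Standardized mean difference with the pooled standard deviation.
n_t, n_c = len(treat), len(ctrl)
sp = np.sqrt(((n_t - 1) * treat.var(ddof=1) + (n_c - 1) * ctrl.var(ddof=1))
             / (n_t + n_c - 2))
d = (treat.mean() - ctrl.mean()) / sp

pi_hat = stats.norm.cdf(d)  # plug-in estimate of the overlap proportion
print(f"d = {d:.2f}, estimated pi = {pi_hat:.2f}")
```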
Pozhitkov, Alex E; Noble, Peter A; Bryk, Jarosław; Tautz, Diethard
2014-01-01
Although microarrays are widely used analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance. Two custom arrays were used to evaluate the revised design: one based on 25 mer probes from an Affymetrix design and the other based on 60 mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal with the added option of obtaining absolute rather than relative measurements. The assessment of technical variance within the experiments, combined with the calibration of probes, allows removal of poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
Lowthian, P; Disler, P; Ma, S; Eagar, K; Green, J; de Graaff, S
2000-10-01
To investigate whether the Australian National Sub-acute and Non-acute Patient Casemix Classification (SNAP) and Functional Independence Measure and Functional Related Group (Version 2) (FIM-FRG2) casemix systems can be used to predict functional outcome, and reduce the variance of length of stay (LOS) of patients undergoing rehabilitation after strokes. The study comprised a retrospective analysis of the records of patients admitted to the Cedar Court Healthsouth Rehabilitation Hospital for rehabilitation after stroke. The sample included 547 patients (83.3% of those admitted with stroke during this period). Patient data were stratified for analysis into the five SNAP or nine FIM-FRG2 groups, on the basis of the admission FIM scores and age. The AN-SNAP classification accounted for a 30.7% reduction of the variance of LOS and 44.2% of motor FIM, while the FIM-FRG2 accounted for 33.5% and 56.4% reductions respectively. Comparison of the Cedar Court data with the national AN-SNAP data showed differences in the LOS and functional outcomes of older, severely disabled patients. Intensive rehabilitation in selected patients of this type appears to have positive effects, albeit with a slightly longer period of inpatient rehabilitation. Casemix classifications can be powerful management tools. Although FIM-FRG2 accounts for a greater reduction in variance than SNAP, division into nine groups meant that some contained few subjects. This paper supports the introduction of AN-SNAP as the standard casemix tool for rehabilitation in Australia, which will hopefully lead to rational, adequate funding of the rehabilitation phase of care.
Relationship between extrinsic factors and the acromio-humeral distance.
Mackenzie, Tanya Anne; Herrington, Lee; Funk, Lenard; Horsley, Ian; Cools, Ann
2016-06-01
Maintenance of the subacromial space is important in impingement syndromes. Research exploring the correlation between biomechanical factors and the subacromial space would be beneficial. To establish whether a relationship exists between the independent variables of scapular rotation, shoulder internal rotation, shoulder external rotation, total arc of shoulder rotation, pectoralis minor length, thoracic curve, and shoulder activity level and the dependent variables AHD in neutral, AHD in 60° arm abduction, and percentage reduction in AHD. Controlled laboratory study. Data from 72 male control shoulders (24.28 years, SD 6.81 years) and 186 elite sportsmen's shoulders (25.19 years, SD 5.17 years) were included in the analysis. The independent variables were quantified, and real-time ultrasound was used to measure the dependent variable, acromio-humeral distance. Shoulder internal rotation and pectoralis minor length explained 8% and 6% respectively of the variance in acromio-humeral distance in neutral. Pectoralis minor length accounted for 4% of the variance in 60° arm abduction. Total arc of rotation, shoulder external rotation range, and shoulder activity levels explained 9%, 15%, and 16%-29% of the variance respectively in percentage reduction in acromio-humeral distance during arm abduction to 60°. Pectoralis minor length, shoulder rotation ranges, total arc of shoulder rotation, and shoulder activity levels were found to have weak to moderate relationships with acromio-humeral distance. The existence and strength of the relationships were population specific and dependent on arm position. The relationships only accounted for small variances in AHD, indicating that in addition to these factors there are other factors involved in determining AHD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr², that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr² is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr² apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects: the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr² in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
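For the random intercept special case, the coefficient plausibly reduces to the ratio of the random-intercept variance to the total conditional variance (an ICC-like quantity). The sketch below computes that ratio from a fitted mixed model; the reduction formula is an assumption for illustration, not a transcription of the paper's derivation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
groups = np.repeat(np.arange(30), 10)          # 30 clusters, 10 obs each
u = rng.normal(0.0, 1.0, 30)[groups]           # random intercepts
y = 2.0 + u + rng.normal(0.0, 0.5, 300)        # strong random effects
X = np.ones((300, 1))                          # intercept-only fixed part

res = sm.MixedLM(y, X, groups=groups).fit()
var_re = float(res.cov_re.iloc[0, 0])          # random-intercept variance
var_resid = res.scale                          # residual variance
r2_r = var_re / (var_re + var_resid)           # assumed special-case form
print(f"random-effects R^2 (ICC-like ratio): {r2_r:.2f}")
```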
The key kinematic determinants of undulatory underwater swimming at maximal velocity.
Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross
2016-01-01
The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r² = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r² = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with other beamformers.
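The difference between DAS and DMAS lies entirely in the combination step: DAS sums the delayed channels, while DMAS sums signed square roots of all pairwise channel products. A minimal sketch (delays assumed already applied; the MV weighting of MVB-DMAS is omitted):

```python
import numpy as np

def das(signals):
    """Delay-and-sum: signals is (n_elements, n_samples), delays applied."""
    return signals.sum(axis=0)

def dmas(signals):
    """Delay-multiply-and-sum: signed square roots of all pairwise channel
    products, summed; this suppresses incoherent (sidelobe) energy."""
    n = signals.shape[0]
    out = np.zeros(signals.shape[1])
    for i in range(n - 1):
        prod = signals[i] * signals[i + 1:]  # products with later channels
        out += (np.sign(prod) * np.sqrt(np.abs(prod))).sum(axis=0)
    return out

rng = np.random.default_rng(9)
sig = rng.normal(size=(16, 200))  # 16 pre-delayed element signals
print(das(sig).shape, dmas(sig).shape)
```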
Held, Philip; Owens, Gina P; Monroe, J Richard; Chard, Kathleen M
2017-08-01
The present study examined the predictive role of increased self-reported mindfulness skills on reduced trauma-related guilt in a sample of veterans over the course of residential treatment for posttraumatic stress disorder (PTSD; N = 128). The residential treatment consisted of seven weeks of intensive cognitive processing therapy (CPT) for PTSD, as well as additional psychoeducational groups, including seven sessions on mindfulness skills. Increased mindfulness skills describing, acting with awareness, and accepting without judgment were significantly associated with reductions in trauma-related guilt over the course of treatment. Increases in the ability to act with awareness and accept without judgment were significantly associated with reductions in global guilt, R² = .26, guilt distress, R² = .23, guilt cognitions, R² = .23, and lack of justification, R² = .11. An increase in the ability to accept without judgment was the only self-reported mindfulness skill that was associated with reductions in hindsight bias, β = -.34, and wrongdoing, β = -.44. Increases in self-reported mindfulness skills explained 15.1 to 24.1% of the variance in reductions in trauma-related guilt, suggesting that mindfulness skills may play a key role in reducing the experience of trauma-related guilt during psychotherapy. Our results provide preliminary support for the use of mindfulness groups as an adjunct to traditional evidence-based treatments aimed at reducing trauma-related guilt, though this claim needs to be tested further using experimental designs. Copyright © 2017 International Society for Traumatic Stress Studies.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
Pupil Control Ideology and the Salience of Teacher Characteristics
ERIC Educational Resources Information Center
Smyth, W. J.
1977-01-01
The explanatory power of the combined biographical variables of teacher age, experience, sex, organizational status, and academic qualifications for variances in pupil control ideology (PCI) is seriously questioned, since as little as 6 percent of PCI variance may be explained by reference to these particular variables. (Author)
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
48 CFR 9904.407-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering studies, experience, or other supporting data) used in setting and revising standards; the period... their related variances may be recognized either at the time purchases of material are entered into the...-price standards are used and related variances are recognized at the time purchases of material are...
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
Aerobic fitness, maturation, and training experience in youth basketball.
Carvalho, Humberto M; Coelho-e-Silva, Manuel J; Eisenmann, Joey C; Malina, Robert M
2013-07-01
The relationships of chronological age (CA), maturation, training experience, and body dimensions with peak oxygen uptake (VO2max) were considered in male basketball players 14-16 y of age. Data for all players included maturity status estimated as percentage of predicted adult height attained at the time of the study (Khamis-Roche protocol), years of training, body dimensions, and VO2max (incremental maximal test on a treadmill). Proportional allometric models derived from stepwise regressions were used to incorporate either CA or maturity status and to incorporate years of formal training in basketball. Estimates for size exponents (95% CI) from the separate allometric models for VO2max were height 2.16 (1.23-3.09), body mass 0.65 (0.37-0.93), and fat-free mass 0.73 (0.46-1.02). Body dimensions explained 39% to 44% of variance. The independent variables in the proportional allometric models explained 47% to 60% of variance in VO2max. Estimated maturity status (11-16% of explained variance) and training experience (7-11% of explained variance) were significant predictors with either body mass or estimated fat-free mass (P ≤ .01) but not with height. Biological maturity status and training experience in basketball had a significant contribution to VO2max via body mass and fat-free mass and also had an independent positive relation with aerobic performance. The results highlight the importance of considering variation associated with biological maturation in aerobic performance of late-adolescent boys.
Fractal structures and fractal functions as disease indicators
Escos, J.M; Alados, C.L.; Emlen, J.M.
1995-01-01
Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability on two species, green peppers (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance in allometric relationship between control plants, and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in complexity of branch structure in green pepper with a viral infection. Box-counting fractal dimension of branch architecture declined under stress infection. We also tested the reduction in complexity of behavioral patterns under stress situations in Spanish ibex (Capra pyrenaica). Fractal dimension of head-lift frequency distribution measures predator detection efficiency. This dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proves to be the most powerful tool for characterizing fractal behavior, revealing a reduction in complexity of time distribution activity under parasitic infection.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) in MC was implemented to speed up this duration. This work focused on optimisation of VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection) controlled by global electron cut-off energies of 1, 2 and 5 MeV using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy.
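Range rejection itself is a simple decision rule: an electron below the global cut-off whose residual range cannot reach the region of interest is terminated and its energy deposited locally. A schematic sketch (the range lookup is a made-up stand-in, not EGSnrc's restricted stopping-power treatment):

```python
def range_reject(energy_mev, distance_to_roi_cm, ecut_mev, csda_range_cm):
    """Terminate an electron history (depositing its energy locally) if it
    is below the global cut-off and its residual range cannot reach the
    region of interest; otherwise keep transporting it."""
    if energy_mev < ecut_mev and csda_range_cm(energy_mev) < distance_to_roi_cm:
        return "terminate"
    return "transport"

# Crude water-like range lookup (~0.5 cm per MeV); a stand-in, not NIST data.
print(range_reject(2.0, 3.0, 5.0, lambda e: 0.5 * e))  # -> "terminate"
```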
Keyworth, C; Nelson, P A; Bundy, C; Pye, S R; Griffiths, C E M; Cordingley, L
2018-08-01
Message framing is important in health communication research to encourage behaviour change. Psoriasis, a long-term inflammatory skin condition, has additional comorbidities including high levels of anxiety and cardiovascular disease (CVD), making message framing particularly important. This experimental study aimed to: (1) identify whether health messages about psoriasis presented as either gain- or loss-framed were more effective for prompting changes in behavioural intentions (BI), (2) examine whether BI were driven by a desire to improve psoriasis or reduce CVD risk; (3) examine emotional reactions to message frame; and (4) examine predictors of BI. A two by two experiment examined the effects on BI of message frame (loss vs. gain) and message focus (psoriasis symptom reduction vs. CVD risk reduction). Participants with psoriasis (n = 217) were randomly allocated to one of four evidence-based health messages related to either smoking, alcohol, diet or physical activity, using an online questionnaire. BI was the primary outcome. Analysis of variance tests and hierarchical multiple regression analyses were conducted. A significant frame by focus interaction was found for BI to reduce alcohol intake (p = .023); loss-framed messages were more effective for CVD risk reduction information, whilst gain-framed messages were more effective for psoriasis symptom reduction information. Message framing effects were not found for BI for increased physical activity and improving diet. High CVD risk was a significant predictor of increased BI for both alcohol reduction (β = .290, p < .01) and increased physical activity (β = -.231, p < .001). Message framing may be an important factor to consider depending on the health benefit emphasised (disease symptom reduction or CVD risk reduction) and patient-stated priorities. Condition-specific health messages in psoriasis populations may increase the likelihood of message effectiveness for alcohol reduction.
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we do point out the potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance, ⟨ρ²⟩. We show that the normalized density variance ⟨ρ²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog c̃″², and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived analytic results relating several other second and third order moments and see coupling between odd and even order moments, demonstrating a natural and inherent skewness in the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in the progress variable formulation).
Umegaki, Hiroyuki; Yanagawa, Madoka; Nonogaki, Zen; Nakashima, Hirotaka; Kuzuya, Masafumi; Endo, Hidetoshi
2014-01-01
We surveyed the care burden of family caregivers, their satisfaction with the services, and whether their care burden was reduced by the introduction of the LTCI care services. We randomly enrolled 3000 of 43,250 residents of Nagoya City aged 65 and over who had been certified as requiring long-term care and who used at least one type of service provided by the public LTCI; 1835 (61.2%) subjects returned the survey. A total of 1015 subjects for whom complete sets of data were available were employed for statistical analysis. Analysis of variance for the continuous variables and χ² analysis for the categorical variables were performed. Multiple logistic analysis was performed with the factors with p values of <0.2 in the χ² analysis of burden reduction. A total of 68.8% of the caregivers indicated that the care burden was reduced by the introduction of the LTCI care services, and 86.8% of the caregivers were satisfied with the LTCI care services. A lower age of caregivers, a more advanced need classification level, and more satisfaction with the services were independently associated with a reduction of the care burden. In Japanese LTCI, the overall satisfaction of the caregivers appears to be relatively high and is associated with the reduction of the care burden. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Martin, Andrew J.
2014-01-01
Students with attention-deficit/hyperactivity disorder (ADHD) experience significant academic difficulties that can lead to numerous negative academic consequences. With a focus on adverse academic outcomes, this study seeks to disentangle variance attributable to ADHD from variance attributable to salient personal and contextual covariates.…
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
Woodbury, Allan D.; Rubin, Yoram
2000-01-01
A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.
Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.
Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-12
Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters that show large uncertainty and might drive product quality beyond a limit critical to the product may be missed. This might occur during the evaluation of experiments since residual/un-modelled variance in the experiments is larger than expected a priori. Estimation of such a risk is the task of the presented novel retrospective power analysis permutation test. This is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow for how to mitigate the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reducing process variance and increasing patient safety.
Yandigeri, Mahesh S; Malviya, Nityanand; Solanki, Manoj Kumar; Shrivastava, Pooja; Sivakumar, G
2015-08-01
A chitinolytic actinomycete, Streptomyces vinaceusdrappus S5MW2, was isolated from a water sample of Chilika Lake, India and identified using 16S rRNA gene sequencing. It showed in vitro antifungal activity against the sclerotia-producing pathogen Rhizoctonia solani in a dual culture assay and by chitinase enzyme production in a chitin-supplemented minimal broth. Moreover, isolate S5MW2 was further characterized for biocontrol (BC) and plant growth promoting features in a greenhouse experiment with or without colloidal chitin (CC). Results of the greenhouse experiment showed that CC supplementation with S5MW2 produced significantly greater growth of tomato plants and superior disease reduction compared to the untreated control and plants treated without CC. Moreover, higher accumulation of chitinase was also recovered in the CC-supplemented plants. The significant effect of CC was also confirmed by the analysis of variance of the greenhouse parameters. These results show that the marine antagonist S5MW2 has BC efficiency against R. solani and that the chitinase enzyme played an important role in plant resistance.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
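For normally distributed data the standard error of the sample variance has the closed form SE(s²) = s²√(2/(n−1)), which makes such error bars easy to sketch; whether the paper's analysis uses exactly this form is an assumption here:

```python
import numpy as np

def variance_with_error_bar(x):
    """Sample variance with an approximate standard error,
    SE(s^2) = s^2 * sqrt(2 / (n - 1)), valid for normal data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)
    return s2, s2 * np.sqrt(2.0 / (n - 1))

rng = np.random.default_rng(10)
for n in (10, 20, 50):
    s2, se = variance_with_error_bar(rng.normal(0.0, 1.0, n))
    print(f"n = {n:3d}: s^2 = {s2:.2f} +/- {se:.2f}")
```

At n = 10 the standard error is roughly half of s² itself, consistent with the warning above that variance comparisons based on fewer than 20 measurements are unreliable.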
Zimmerman, John E.; Chan, May T.; Jackson, Nicholas; Maislin, Greg; Pack, Allan I.
2012-01-01
Study Objectives: To determine the effect of different genetic backgrounds on demographic and environmental interventions that affect sleep and evaluate variance of these measures; and to evaluate sleep and variance of sleep behaviors in 6 divergent laboratory strains of common origin. Design: Assessment of the effects of age, sex, mating status, food sources, and social experience using video analysis of sleep behavior in 2 different strains of Drosophila, white1118ex (w1118ex) and white Canton-S (wCS10). Sleep was also determined for 6 laboratory strains of Canton-S and 3 inbred lines. The variance of total sleep was determined for all groups and conditions. Measurements and Results: The circadian periods and the effects of age upon sleep were the same between w1118ex and wCS10 strains. However, the w1118ex and wCS10 strains demonstrated genotype-dependent differences in the effects upon sleep of sex, mating status, social experience, and being on different foods. Variance of total sleep was found to differ in a genotype dependent manner for interventions between the w1118ex and wCS10 strains. Six different laboratory Canton-S strains were found to have significantly different circadian periods (P < 0.001) and sleep phenotypes (P < 0.001). Three inbred lines showed reduced variance for sleep measurements. Conclusions: One must control environmental conditions in a rigorously consistent manner to ensure that sleep data may be compared between experiments. Genetic background has a significant impact upon changes in sleep behavior and variance of behavior due to demographic factors and environmental interventions. This represents an opportunity to discover new genes that modify sleep/wake behavior. Citation: Zimmerman JE; Chan MT; Jackson N; Maislin G; Pack AI. Genetic background has a major impact on differences in sleep resulting from environmental influences in Drosophila. SLEEP 2012;35(4):545-557. PMID:22467993
Sakamoto, Sadanori; Iguchi, Masaki
2018-06-08
Less attention to a balance task reduces the center of foot pressure (COP) variability by automating the task. However, it is not fully understood how the degree of postural automaticity influences voluntary movement and anticipatory postural adjustments. Eleven healthy young adults performed a bipedal, eyes-closed standing task under three conditions: Control (C, standing task), Single (S, standing + reaction tasks), and Dual (D, standing + reaction + mental tasks). The reaction task was flexing the right shoulder in response to an auditory stimulus, which causes counter-clockwise rotational torque, and the mental task was an arithmetic task. The COP variance before the reaction task was reduced in the D condition compared to that in the C and S conditions. On average the onsets of the arm movement and the vertical torque (Tz, anticipatory clockwise rotational torque) were both delayed, and the maximal Tz slope (the rate at which the torque develops) became less steep in the D condition compared to those in the S condition. When these data in the D condition were expressed as a percentage of those in the S condition, the arm movement onset and the Tz slope were positively and negatively, respectively, correlated with the COP variance. By using the mental-task-induced COP variance reduction as the indicator of postural automaticity, our data suggest that the balance task for those with more COP variance reduction is less cognitively demanding, leading to a shorter reaction time, probably due to the attention shift from the automated balance task to the reaction task. Copyright © 2018 Elsevier B.V. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R²_RE, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. We emphasize how it differs from the previously suggested fixed effects coefficient of determination. If R²_RE is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R²_RE away from 0 indicates evidence of variance reduction in support of the mixed model. If R²_RE is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects, so the model can be estimated using the dummy-variable approach. We derive explicit formulas for R²_RE in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combining 13 studies on tuberculosis vaccine.
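As a rough illustration of the random intercept special case described above, the proportion of conditional variance attributable to the random effect can be read off a fitted mixed model; the simulated data, variable names, and settings below are illustrative assumptions, not the paper's examples.

```python
# Minimal sketch: random-effects R^2 for a random-intercept model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 30, 20
group = np.repeat(np.arange(n_groups), n_per)
b = rng.normal(0.0, 2.0, n_groups)[group]                  # random intercepts, sd = 2
x = rng.normal(size=group.size)
y = 1.0 + 0.5 * x + b + rng.normal(0.0, 1.0, group.size)   # residual sd = 1
df = pd.DataFrame(dict(y=y, x=x, group=group))

fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()
var_b = fit.cov_re.iloc[0, 0]    # estimated random-intercept variance
var_e = fit.scale                # estimated residual variance
r2_re = var_b / (var_b + var_e)  # should land near 4 / (4 + 1) = 0.8 here
print(f"random-effects R^2 ~ {r2_re:.2f}")
```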
The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility
NASA Astrophysics Data System (ADS)
Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon
This paper considers a hybrid risky asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both European vanilla options and barrier options near the zero elasticity of variance. The accuracy of the approximation is established in a rigorous manner. A numerical experiment on implied volatilities shows that the hybrid model improves on several well-known models in fitting the data for different maturities.
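The paper derives an analytic approximation; as a loose numerical companion, a plain Monte Carlo pricer for a European call under one toy parametrization of a CEV-type diffusion with a fast mean-reverting volatility factor might look as follows. The dynamics, parameter values, and discretization are assumptions for illustration, not the authors' model specification.

```python
# Toy Monte Carlo pricer: dS = r S dt + exp(Y) S^(1+theta) dW1,
# with Y a fast mean-reverting OU volatility factor.
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, T = 100.0, 100.0, 0.03, 1.0
theta = 0.05                             # elasticity parameter, near zero
kappa, m, nu = 5.0, np.log(0.2), 0.3     # OU factor: speed, mean, vol-of-vol
n_paths, n_steps = 100_000, 200
dt = T / n_steps

S = np.full(n_paths, S0)
Y = np.full(n_paths, m)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    sigma = np.exp(Y)
    S = np.maximum(S + r * S * dt + sigma * S**(1.0 + theta) * dW1, 1e-8)
    Y = Y + kappa * (m - Y) * dt + nu * dW2

payoff = np.exp(-r * T) * np.maximum(S - K, 0.0)
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"call ~ {payoff.mean():.3f} +/- {1.96 * stderr:.3f}")
```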
Experience with botulinum toxin in chronic migraine.
Castrillo Sanz, A; Morollón Sánchez-Mateos, N; Simonet Hernández, C; Fernández Rodríguez, B; Cerdán Santacruz, D; Mendoza Rodríguez, A; Rodríguez Sanz, M F; Tabernero García, C; Guerrero Becerra, P; Ferrero Ros, M; Duate García-Luis, J
2016-10-21
The purposes of this study were to describe our 16-month experience with onabotulinumtoxinA (OnabotA) for the treatment of chronic migraine (CM) in the Spanish province of Segovia, evaluate its benefits, and determine clinical markers of good response to treatment. Prospective study of patients with CM who received OnabotA for 16 months. The effectiveness of OnabotA was evaluated based on the reduction in the number of headache days, pain intensity, and side effects. We used two-way analysis of variance (ANOVA) to assess the effects of treatment according to the time factor. We studied the correlation between treatment effects and other variables using a linear regression model to establish the clinical markers of good response to treatment. We included 69 patients who met the diagnostic criteria for CM. Patients underwent an average of 2 infiltrations. Mean age was 43 years; 88.4% were women. The number of headache days and pain intensity decreased significantly (P < .005); improvements remained over time. We found a negative correlation between the reduction in pain intensity and the number of treatments before OnabotA. The beneficial effects of OnabotA for CM continue over time. OnabotA is a safe and well-tolerated treatment whose use for refractory CM should not be delayed since early treatment provides greater benefits. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Hersoug, Anne Grete
2004-12-01
The first focus of this study was to explore therapists' personal characteristics as predictors of the proportion of interpretation used in brief dynamic psychotherapy (N=39; maximum 40 sessions), using data from the Norwegian Multicenter Study on Process and Outcome of Psychotherapy (1995). The main finding was that therapists who had experienced good parental care gave less interpretation (28% of variance accounted for), whereas therapists with more negative introjects used a higher proportion of interpretation (16% of variance accounted for). Patients' pretreatment characteristics were not predictive of therapists' use of interpretation. The second focus was to investigate the impact of therapists' personality and the proportion of interpretation on the development of patients' maladaptive defensive functioning over the course of therapy. Better parental care and fewer negative introjects in therapists were associated with a positive influence and accounted for 5% of the variance in the reduction of patients' maladaptive defense.
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
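A minimal sketch of the first proposed check, using the standard sample estimator of the relative variance of the variance; the tally distribution and sample sizes below are illustrative.

```python
# Relative VOV of the mean estimator: sum(d^4) / (sum(d^2))^2 - 1/N,
# which should fall off roughly as 1/N for a well-behaved tally.
import numpy as np

def relative_vov(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    s2 = np.sum(d**2)
    return np.sum(d**4) / s2**2 - 1.0 / x.size

rng = np.random.default_rng(2)
scores = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # skewed tally
for n in (1_000, 10_000, 100_000):
    print(f"N = {n:>7d}  VOV = {relative_vov(scores[:n]):.4f}")
```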
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, which can then be tested for association with one or more covariates of interest. This paper revisits one such approach, previously known as the principal component of heritability and renamed here as the principal component of explained variance (PCEV). As its name suggests, PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, namely that it is conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of PCEV using an extensive set of simulations. Furthermore, the use of PCEV is illustrated using three examples taken from the fields of epigenetics and brain imaging.
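A minimal sketch of the core PCEV computation for a multivariate outcome and a single covariate, posed as a generalized eigenproblem between the model and residual covariance matrices; the data and dimensions are illustrative, and the exact estimator in the paper may differ.

```python
# Find w maximizing the proportion of variance of w'Y explained by x.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n, p = 500, 10
x = rng.normal(size=n)
beta = np.zeros(p)
beta[:3] = 0.5                               # x affects 3 of 10 outcomes
Y = np.outer(x, beta) + rng.normal(size=(n, p))

X = np.column_stack([np.ones(n), x])         # regress each outcome on x
B = np.linalg.lstsq(X, Y, rcond=None)[0]
fitted = X @ B
resid = Y - fitted
V_model = (fitted - fitted.mean(0)).T @ (fitted - fitted.mean(0)) / n
V_resid = resid.T @ resid / n

vals, vecs = eigh(V_model, V_resid)          # generalized eigenproblem
w = vecs[:, -1]                              # PCEV loadings
h2 = vals[-1] / (1.0 + vals[-1])             # proportion of variance explained
print(f"explained-variance proportion along PCEV: {h2:.2f}")
```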
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared with DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width at half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared with DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
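For orientation, a sketch of the DAS and DMAS combiners on one set of pre-delayed channel samples is given below; the MV weighting step that defines MVB-DMAS is omitted, and the signal model is an assumption.

```python
# DAS vs. DMAS on pre-delayed channel samples s (one image point).
import numpy as np

def das(s):
    return np.sum(s)

def dmas(s):
    # sum over i < j of sign(s_i s_j) * sqrt(|s_i s_j|), computed via
    # the signed square roots: sr_i * sr_j summed over distinct pairs
    sr = np.sign(s) * np.sqrt(np.abs(s))
    return (np.sum(sr)**2 - np.sum(sr**2)) / 2.0

rng = np.random.default_rng(4)
s = 1.0 + 0.1 * rng.normal(size=64)   # coherent signal + noise, 64 elements
print(f"DAS = {das(s):.1f}, DMAS = {dmas(s):.1f}")
```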
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes: countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity: gene expression (affine state-dependent rates), and aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of the pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
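A simplified stand-in for the paired-trajectory idea, applied to a birth-death process: each Poisson increment is drawn through the inverse CDF with u on one path and 1-u on its partner, inducing negative correlation. The rates, step sizes, and pairing scheme are illustrative, not the authors' algorithm.

```python
# Antithetic-pair tau-leaping for a birth-death process.
import numpy as np
from scipy.stats import poisson

def tau_leap_pair(x0, birth, death, tau, n_steps, rng):
    xa = xb = float(x0)
    for _ in range(n_steps):
        u1, u2 = rng.uniform(size=2)
        # both channels use the state at the start of the step
        ba = poisson.ppf(u1, birth(xa) * tau)
        bb = poisson.ppf(1.0 - u1, birth(xb) * tau)
        da = poisson.ppf(u2, death(xa) * tau)
        db = poisson.ppf(1.0 - u2, death(xb) * tau)
        xa = max(xa + ba - da, 0.0)
        xb = max(xb + bb - db, 0.0)
    return xa, xb

rng = np.random.default_rng(5)
birth = lambda x: 10.0        # constant production
death = lambda x: 0.1 * x     # first-order degradation
pairs = np.array([tau_leap_pair(50, birth, death, 0.5, 100, rng)
                  for _ in range(2000)])
print(f"mean ~ {pairs.mean():.1f}, pair correlation = "
      f"{np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]:.2f}")
```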
Representativeness of laboratory sampling procedures for the analysis of trace metals in soil.
Dubé, Jean-Sébastien; Boudreault, Jean-Philippe; Bost, Régis; Sona, Mirela; Duhaime, François; Éthier, Yannic
2015-08-01
This study was conducted to assess the representativeness of laboratory sampling protocols for purposes of trace metal analysis in soil. Five laboratory protocols were compared, including conventional grab sampling, to assess the influence of sectorial splitting, sieving, and grinding on measured trace metal concentrations and their variability. It was concluded that grinding was the most important factor in controlling the variability of trace metal concentrations. Grinding increased the reproducibility of sample mass reduction by rotary sectorial splitting by up to two orders of magnitude. Combined with rotary sectorial splitting, grinding increased the reproducibility of trace metal concentrations by almost three orders of magnitude compared with grab sampling. Moreover, results showed that if grinding is used as part of a mass reduction protocol by sectorial splitting, the effect of sieving on reproducibility becomes insignificant. Gy's sampling theory and practice was also used to analyze the aforementioned sampling protocols. While the theoretical relative variances calculated for each sampling protocol agreed qualitatively with the experimental variances, their quantitative agreement was very poor. It appears that the parameters used in the calculation of theoretical sampling variances may not correctly estimate the constitutional heterogeneity of soils or soil-like materials. Finally, the results highlight the pitfalls of grab sampling, namely that it does not exert control over incorrect sampling errors and that it is strongly affected by distribution heterogeneity.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
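A simplified relative of the proposed statistic is Yuen's trimmed-means Welch test for two groups, sketched below; Hall's transformation and the multi-group Alexander-Govern form are omitted, and the data are synthetic.

```python
# Yuen-Welch test: trimmed means with winsorized variances.
import numpy as np
from scipy import stats

def yuen_welch(a, b, trim=0.2):
    def pieces(x):
        x = np.sort(np.asarray(x, dtype=float))
        g = int(trim * x.size)
        h = x.size - 2 * g                        # effective sample size
        tmean = x[g:x.size - g].mean()            # trimmed mean
        swv = np.clip(x, x[g], x[-g - 1]).var(ddof=1)  # winsorized variance
        return tmean, (x.size - 1) * swv / (h * (h - 1)), h
    m1, d1, h1 = pieces(a)
    m2, d2, h2 = pieces(b)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1**2 / (h1 - 1) + d2**2 / (h2 - 1))
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(6)
a = rng.lognormal(0.0, 1.0, 40)                   # skewed, unequal variances
b = rng.lognormal(0.3, 1.5, 25)
t, p = yuen_welch(a, b)
print(f"t = {t:.2f}, p = {p:.3f}")
```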
Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar
NASA Technical Reports Server (NTRS)
Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.
2012-01-01
In this paper, we develop an estimator for the vertical flux of horizontal momentum applicable to arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted at high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements from 23 April 2008 and 21 December 2007, at 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.
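The core of such an estimator is that each beam's radial-velocity variance is a linear function of the six wind (co)variances, so several beam directions determine the momentum fluxes by least squares. The sketch below uses an invented beam geometry and noise level, not the actual PFISR configuration.

```python
# var(v_r) = k' C k for line-of-sight unit vector k and wind covariance C;
# stack one row per beam and solve for the six unique elements of C.
import numpy as np

def design_row(az, ze):
    kx = np.sin(ze) * np.sin(az)   # east
    ky = np.sin(ze) * np.cos(az)   # north
    kz = np.cos(ze)                # up
    return [kx**2, ky**2, kz**2, 2*kx*ky, 2*kx*kz, 2*ky*kz]

beams = [(np.radians(az), np.radians(ze))
         for az, ze in [(0, 15), (90, 15), (180, 15), (270, 15), (45, 25), (0, 0)]]
A = np.array([design_row(az, ze) for az, ze in beams])

truth = np.array([100.0, 80.0, 4.0, 10.0, -3.0, 2.0])  # [su2, sv2, sw2, uv, uw, vw]
var_r = A @ truth + np.random.default_rng(7).normal(0.0, 1.0, len(beams))

est, *_ = np.linalg.lstsq(A, var_r, rcond=None)
print(f"momentum fluxes <u'w'> ~ {est[4]:.1f}, <v'w'> ~ {est[5]:.1f}")
```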
Genung, Mark A; Fox, Jeremy; Williams, Neal M; Kremen, Claire; Ascher, John; Gibbs, Jason; Winfree, Rachael
2017-07-01
The relationship between biodiversity and the stability of ecosystem function is a fundamental question in community ecology, and hundreds of experiments have shown a positive relationship between species richness and the stability of ecosystem function. However, these experiments have rarely accounted for common ecological patterns, most notably skewed species abundance distributions and non-random extinction risks, making it difficult to know whether experimental results can be scaled up to larger, less manipulated systems. In contrast with the prolific body of experimental research, few studies have examined how species richness affects the stability of ecosystem services at more realistic, landscape scales. The paucity of these studies is due in part to a lack of analytical methods that are suitable for the correlative structure of ecological data. A recently developed method, based on the Price equation from evolutionary biology, helps resolve this knowledge gap by partitioning the effect of biodiversity into three components: richness, composition, and abundance. Here, we build on previous work and present the first derivation of the Price equation suitable for analyzing temporal variance of ecosystem services. We applied our new derivation to understand the temporal variance of crop pollination services in two study systems (watermelon and blueberry) in the mid-Atlantic United States. In both systems, but especially in the watermelon system, the stronger driver of temporal variance of ecosystem services was fluctuations in the abundance of common bee species, which were present at nearly all sites regardless of species richness. In contrast, temporal variance of ecosystem services was less affected by differences in species richness, because lost and gained species were rare. Thus, the findings from our more realistic landscapes differ qualitatively from the findings of biodiversity-stability experiments. © 2017 by the Ecological Society of America.
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for obtaining information applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments, the error propagated from the B1+ map must be considered: optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
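A Monte Carlo sketch of the propagation question studied above, for a two-point variable-flip-angle (linearized SPGR) T1 fit; the TR, flip angles, and noise levels are assumed values.

```python
# Compare T1 spread with and without noise on the measured B1+ map.
import numpy as np

TR, T1_TRUE, M0 = 15e-3, 1.0, 1.0
ALPHAS = np.radians([4.0, 18.0])      # nominal flip angles

def spgr(alpha, b1, t1):
    e1 = np.exp(-TR / t1)
    a = alpha * b1                    # actual flip angle
    return M0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def fit_t1(s1, s2, b1):
    # linearized fit: S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1)
    a1, a2 = ALPHAS[0] * b1, ALPHAS[1] * b1
    e1 = (s2 / np.sin(a2) - s1 / np.sin(a1)) / (s2 / np.tan(a2) - s1 / np.tan(a1))
    return -TR / np.log(e1)

rng = np.random.default_rng(8)
n = 20_000
sig_s, sig_b1 = 0.002, 0.02           # SPGR and B1+ noise levels (assumed)
s1 = spgr(ALPHAS[0], 1.0, T1_TRUE) + rng.normal(0, sig_s, n)
s2 = spgr(ALPHAS[1], 1.0, T1_TRUE) + rng.normal(0, sig_s, n)
b1 = 1.0 + rng.normal(0, sig_b1, n)   # noisy measured B1+ map

sd_with_b1 = np.std(fit_t1(s1, s2, b1))          # signal + B1 noise
sd_signals = np.std(fit_t1(s1, s2, np.ones(n)))  # signal noise only
print(f"T1 sd: {sd_with_b1:.3f} s with B1 noise, {sd_signals:.3f} s without")
```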
Global Distributions of Temperature Variances at Different Stratospheric Altitudes from GPS/MET Data
NASA Astrophysics Data System (ADS)
Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.
The GPS/MET measurements at altitudes of 5-35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5-7 km thick layers centered at 10, 20, and 30 km. Temperature deviations from the averaged values, and their variances obtained for each profile, are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. The locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.
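A minimal sketch of the layer-wise processing described above, on a synthetic profile: fit a second-order polynomial background in a thick layer and take the variance of the residual fluctuations (all numbers assumed).

```python
# Layer-wise residual variance after second-order polynomial smoothing.
import numpy as np

z = np.linspace(5.0, 35.0, 301)                  # altitude grid, km
rng = np.random.default_rng(9)
T = (250 - 2.0 * z + 0.03 * z**2                 # smooth background
     + 1.5 * np.sin(2 * np.pi * z / 3.0)         # small-scale "wave" signal
     + rng.normal(0.0, 0.3, z.size))             # measurement noise

def layer_variance(z, T, z0, width=6.0):
    m = np.abs(z - z0) <= width / 2
    coef = np.polyfit(z[m], T[m], 2)             # 2nd-order background fit
    resid = T[m] - np.polyval(coef, z[m])
    return resid.var()

for z0 in (10.0, 20.0, 30.0):
    print(f"{z0:.0f} km: variance = {layer_variance(z, T, z0):.3f} K^2")
```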
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to developing standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections, based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of their low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins, and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to a fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as input for the ANOVA. The parameters with the highest impact on quality were deduced and SOPs recommended.
Reduced African Easterly Wave Activity with Quadrupled CO2 in the Superparameterized CESM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannah, Walter M.; Aiyyer, Anantha
African easterly wave (AEW) activity is examined in quadrupled CO2 experiments with the superparameterized CESM (SP-CESM). The variance of 2–10-day filtered precipitation increases with warming over the West African monsoon region, suggesting increased AEW activity. The perturbation enstrophy budget is used to investigate the dynamic signature of AEW activity. The northern wave track becomes more active, associated with enhanced baroclinicity, consistent with previous studies. The southern track exhibits a surprising reduction of wave activity associated with less frequent occurrence of weak waves and a slight increase in the occurrence of strong waves. These changes are connected to changes in the profile of vortex stretching and tilting that can be understood as interconnected consequences of increased static stability from the lapse rate response, weak temperature gradient balance, and the fixed anvil temperature hypothesis.
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and the proposed percentage of violated distance orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction, including position, direction, energy, and particle identification, in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
Genetic and environmental influences on blood pressure variability: a study in twins.
Xu, Xiaojing; Ding, Xiuhua; Zhang, Xinyan; Su, Shaoyong; Treiber, Frank A; Vlietinck, Robert; Fagard, Robert; Derom, Catherine; Gielen, Marij; Loos, Ruth J F; Snieder, Harold; Wang, Xiaoling
2013-04-01
Blood pressure variability (BPV) and its reduction in response to antihypertensive treatment are predictors of clinical outcomes; however, little is known about its heritability. In this study, we examined the relative influence of genetic and environmental sources of variance in BPV, and the extent to which it may depend on race or sex, in young twins. Twins were enrolled from two studies. One study included 703 white twins (308 pairs and 87 singletons) aged 18-34 years; the other included 242 white twins (108 pairs and 26 singletons) and 188 black twins (79 pairs and 30 singletons) aged 12-30 years. BPV was calculated from 24-h ambulatory blood pressure recordings. Twin modeling showed similar results in the separate analyses of both twin studies and in the meta-analysis. Familial aggregation was identified for SBP variability (SBPV) and DBP variability (DBPV), with genetic factors and common environmental factors together accounting for 18-40% and 23-31% of the total variance of SBPV and DBPV, respectively. Unique environmental factors were the largest contributor, explaining up to 82% and 77% of the total variance of SBPV and DBPV, respectively. No sex or race difference in BPV variance components was observed. The results remained the same after adjustment for 24-h blood pressure levels. The variance in BPV is predominantly determined by unique environment in youth and young adults, although familial aggregation due to additive genetic and/or common environmental influences was also identified, explaining about 25% of the variance in BPV.
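For orientation, the classic Falconer-style decomposition below recovers additive genetic (A), common environment (C), and unique environment (E) shares from twin correlations; it is a simplified stand-in for the structural-equation twin models used in such studies, and the input correlations are illustrative.

```python
# Falconer's formulas from monozygotic and dizygotic twin correlations.
def ace_from_correlations(r_mz, r_dz):
    a2 = 2.0 * (r_mz - r_dz)      # additive genetic share
    c2 = 2.0 * r_dz - r_mz        # common (shared) environment share
    e2 = 1.0 - r_mz               # unique environment share
    return a2, c2, e2

# Illustrative values in the range reported above (not the study's data):
a2, c2, e2 = ace_from_correlations(r_mz=0.30, r_dz=0.18)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```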
Quantifying noise in optical tweezers by Allan variance.
Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B
2009-07-20
Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single molecule assays. Various initiatives have been taken to reduce or eliminate noise but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
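A minimal (non-overlapping) Allan-variance sketch: for white noise the curve falls as 1/tau, while drift makes it rise again at long tau, which is how an optimal measurement time can be located. The sampling rate and noise model below are assumed.

```python
# Non-overlapping Allan variance of a position time series.
import numpy as np

def allan_variance(x, m):
    """Allan variance at an averaging window of m samples."""
    n_blocks = x.size // m
    means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(10)
fs = 10_000.0                                  # sampling rate, Hz (assumed)
x = rng.normal(0.0, 1.0, 2_000_000)            # white noise: AVAR ~ 1/tau
x_drift = x + 0.05 * np.arange(x.size) / fs    # linear drift, 0.05 units/s
for m in (10, 100, 1_000, 10_000):
    print(f"tau = {m / fs:.4f} s, AVAR(white) = {allan_variance(x, m):.2e}, "
          f"AVAR(drift) = {allan_variance(x_drift, m):.2e}")
```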
Technical note: Application of the Box-Cox data transformation to animal science experiments.
Peltier, M R; Wilcox, C J; Sharp, D C
1998-03-01
In the use of ANOVA for hypothesis testing in animal science experiments, the assumption of homogeneity of errors often is violated because of scale effects and the nature of the measurements. We demonstrate a method for transforming data so that the assumptions of ANOVA are met (or violated to a lesser degree) and apply it in analysis of data from a physiology experiment. Our study examined whether melatonin implantation would affect progesterone secretion in cycling pony mares. Overall treatment variances were greater in the melatonin-treated group, and several common transformation procedures failed. Application of the Box-Cox transformation algorithm reduced the heterogeneity of error and permitted the assumption of equal variance to be met.
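A short sketch of the transformation step using SciPy's maximum-likelihood Box-Cox estimate, with synthetic groups whose variances scale with their means:

```python
# Box-Cox transformation to reduce heterogeneity of error variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
control = rng.lognormal(mean=1.0, sigma=0.4, size=30)
treated = rng.lognormal(mean=1.8, sigma=0.4, size=30)  # larger scale, larger variance

pooled = np.concatenate([control, treated])
transformed, lam = stats.boxcox(pooled)                # MLE of lambda
tc, tt = transformed[:30], transformed[30:]

print(f"lambda = {lam:.2f}")
print(f"variance ratio before: {treated.var(ddof=1) / control.var(ddof=1):.2f}")
print(f"variance ratio after:  {tt.var(ddof=1) / tc.var(ddof=1):.2f}")
```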
Establishing the situated features associated with perceived stress
Lebois, Lauren A.M.; Hertzog, Christopher; Slavich, George M.; Barrett, Lisa Feldman; Barsalou, Lawrence W.
2016-01-01
We propose that the domain general process of categorization contributes to the perception of stress. When a situation contains features associated with stressful experiences, it is categorized as stressful. From the perspective of situated cognition, the features used to categorize experiences as stressful are the features typically true of stressful situations. To test this hypothesis, we asked participants to evaluate the perceived stress of 572 imagined situations, and to also evaluate each situation for how much it possessed 19 features potentially associated with stressful situations and their processing (e.g., self-threat, familiarity, visual imagery, outcome certainty). Following variable reduction through factor analysis, a core set of 8 features associated with stressful situations—expectation violation, self-threat, coping efficacy, bodily experience, arousal, negative valence, positive valence, and perseveration—all loaded on a single Core Stress Features factor. In a multilevel model, this factor and an Imagery factor explained 88% of the variance in judgments of perceived stress, with significant random effects reflecting differences in how individual participants categorized stress. These results support the hypothesis that people categorize situations as stressful to the extent that typical features of stressful situations are present. To our knowledge, this is the first attempt to establish a comprehensive set of features that predicts perceived stress. PMID:27288834
Analysis of Darwin Rainfall Data: Implications on Sampling Strategy
NASA Technical Reports Server (NTRS)
Rafael, Qihang Li; Bras, Rafael L.; Veneziano, Daniele
1996-01-01
Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
Influence of rumen protozoa on methane emission in ruminants: a meta-analysis approach.
Guyader, J; Eugène, M; Nozière, P; Morgavi, D P; Doreau, M; Martin, C
2014-11-01
A meta-analysis was conducted to evaluate the effects of protozoa concentration on methane emission from ruminants. A database was built from 59 publications reporting data from 76 in vivo experiments. The experiments included in the database recorded methane production and rumen protozoa concentration measured on the same groups of animals. Quantitative data such as diet chemical composition, rumen fermentation and microbial parameters, and qualitative information such as methane mitigation strategies were also collected. In the database, 31% of the experiments reported a concomitant reduction of both protozoa concentration and methane emission (g/kg dry matter intake). Nearly all of these experiments tested lipids as methane mitigation strategies. By contrast, 21% of the experiments reported a variation in methane emission without changes in protozoa numbers, indicating that methanogenesis is also regulated by other mechanisms not involving protozoa. Experiments that used chemical compounds as an antimethanogenic treatment belonged to this group. The relationship between methane emission and protozoa concentration was studied with a variance-covariance model, with experiment as a fixed effect. The experiments included in the analysis had a within-experiment variation of protozoa concentration higher than 5.3 log10 cells/ml, corresponding to the average s.e.m. of the database for this variable. To detect potential interfering factors for the relationship, the influence of several qualitative and quantitative secondary factors was tested. This meta-analysis showed a significant linear relationship between methane emission and protozoa concentration: methane (g/kg dry matter intake) = -30.7 + 8.14 × protozoa (log10 cells/ml), with 28 experiments (91 treatments), residual mean square error = 1.94, and adjusted R² = 0.90. The proportion of butyrate in the rumen positively influenced the least square means of this relationship.
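The within-experiment analysis can be approximated by a regression with experiment entered as a fixed effect, as in the sketch below; the simulated data simply echo the reported relationship and are not the study's database.

```python
# "Experiment as fixed effect" regression of methane yield on protozoa.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(14)
rows = []
for exp in range(28):
    offset = rng.normal(0.0, 3.0)            # experiment-specific level
    for _ in range(3):                       # ~3 treatments per experiment
        proto = rng.uniform(4.0, 6.5)        # log10 cells/ml
        ch4 = -30.7 + 8.14 * proto + offset + rng.normal(0.0, 1.4)
        rows.append(dict(experiment=exp, proto=proto, ch4=ch4))
df = pd.DataFrame(rows)

fit = smf.ols("ch4 ~ proto + C(experiment)", data=df).fit()
print(f"within-experiment slope ~ {fit.params['proto']:.2f}")  # near 8.14
```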
The Relationship between Social Capital in Hospitals and Physician Job Satisfaction
Ommen, Oliver; Driller, Elke; Köhler, Thorsten; Kowalski, Christoph; Ernstmann, Nicole; Neumann, Melanie; Steffen, Petra; Pfaff, Holger
2009-01-01
Background: Job satisfaction in the hospital is an important predictor of many significant management ratios. Acceptance in professional life and high workload are known to be important predictors of job satisfaction. The influence of social capital in hospitals on job satisfaction within the health care system, however, remains to be determined. Thus, this article aimed to analyse the relationship between physicians' overall job satisfaction and social capital in hospitals. Methods: The results of this study are based upon questionnaires mailed to 454 physicians working in the field of patient care in 4 different German hospitals in 2002. 277 clinicians responded to the poll, for a response rate of 61%. Analysis was performed using three linear regression models with physician overall job satisfaction as the dependent variable and age, gender, professional experience, workload, and social capital as independent variables. Results: The first regression model explained nearly 9% of the variance in job satisfaction. Whereas job satisfaction increased slightly with age, gender and professional experience were not identified as significant factors explaining the variance. In a second model that added subjectively perceived workload to the analysis, the explained variance increased to 18%, and job satisfaction decreased significantly with increasing workload. The third model, which included social capital in the hospital, explained 36% of the variance, with social capital, professional experience, and workload as significant factors. Conclusion: This analysis demonstrated that the social capital of an organisation, in addition to professional experience and workload, represents a significant predictor of the overall job satisfaction of physicians working in the field of patient care. Trust, mutual understanding, shared aims, and ethical values are qualities of social capital that unify members of social networks and communities and enable them to act cooperatively. PMID:19445692
Physical heterogeneity control on effective mineral dissolution rates
NASA Astrophysics Data System (ADS)
Jung, Heewon; Navarre-Sitchler, Alexis
2018-04-01
Hydrologic heterogeneity may be an important factor contributing to the discrepancy between laboratory- and field-measured dissolution rates, but the governing factors influencing mineral dissolution rates among various representations of physical heterogeneity remain poorly understood. Here, we present multiple reactive transport simulations of anorthite dissolution in 2D latticed random permeability fields and link the information from local grid-scale (1 cm or 4 m) dissolution rates to domain-scale (1 m or 400 m) effective dissolution rates measured by the flux-weighted average of an ensemble of flow paths. We compare results of homogeneous models to heterogeneous models with different structure and layered permeability distributions within the model domain. Chemistry is simplified to a single dissolving primary mineral (anorthite) distributed homogeneously throughout the domain and a single secondary mineral (kaolinite) that is allowed to dissolve or precipitate. Results show that increasing size in correlation structure (i.e. long integral scales) and high variance in permeability distribution are two important factors inducing a reduction in effective mineral dissolution rates compared to homogeneous permeability domains. Larger correlation structures produce larger zones of low permeability where diffusion is an important transport mechanism. Due to the increased residence time under slow diffusive transport, the saturation state of a solute with respect to a reacting mineral approaches equilibrium and reduces the reaction rate. High variance in permeability distribution favorably develops large low-permeability zones that intensify the reduction in mixing and effective dissolution rate. However, the degree of reduction in effective dissolution rate observed in 1 m × 1 m domains is too small (<1% reduction from the corresponding homogeneous case) to explain the several orders of magnitude of reduction observed in many field studies. When multimodality in permeability distribution is approximated by high permeability variance in 400 m × 400 m domains, the reduction in effective dissolution rate increases due to the effect of long diffusion length scales through zones with very slow reaction rates. The observed scale dependence becomes complicated when pH-dependent kinetics are compared to the results from pH-independent rate constants. In small domains where the entire domain is reactive, faster anorthite dissolution rates and slower kaolinite precipitation rates relative to pH-independent rates at far-from-equilibrium conditions reduce the effective dissolution rate by increasing the saturation state. However, in large domains where less- or non-reactive zones develop, higher kaolinite precipitation rates in less reactive zones increase the effective anorthite dissolution rates relative to the rates observed in pH-independent cases.
Jagsi, Reshma; Weinstein, Debra F; Shapiro, Jo; Kitch, Barrett T; Dorer, David; Weissman, Joel S
2008-03-10
Limiting resident work hours may improve patient safety, but unintended adverse effects are also possible. We sought to assess the impact of Accreditation Council for Graduate Medical Education resident work hour limits implemented on July 1, 2003, on resident experiences and perceptions regarding patient safety. All trainees in 76 accredited programs at 2 teaching hospitals were surveyed in 2003 (preimplementation) and 2004 (postimplementation) regarding their work hours and patient load; perceived relation of work hours, patient load, and fatigue to patient safety; and experiences with adverse events and medical errors. Based on reported weekly duty hours, 13 programs experiencing substantial hours reductions were classified into a "reduced-hours" group. Change scores in outcome measures before and after policy implementation in the reduced-hours programs were compared with those in "other programs" to control for temporal trends, using 2-way analysis of variance with interaction. A total of 1770 responses were obtained (response rate, 60.0%). Analysis was restricted to 1498 responses from respondents in clinical years of training. Residents in the reduced-hours group reported significant reductions in mean weekly duty hours (from 76.6 to 68.0 hours, P < .001), and the percentage working more than 80 hours per week decreased from 44.0% to 16.6% (P < .001). No significant increases in patient load while on call (patients admitted, covered, or cross covered) were observed. Between 2003 and 2004, there was a decrease in the proportion of residents in the reduced-hours programs indicating that working too many hours (63.2% vs 44.0%; P < .001) or cross covering too many patients (65.9% vs 46.9%; P = .001) contributed to mistakes in patient care. There were no significant reductions in these 2 measures in the other group, and the differences in differences were significant (P = .03 and P = .02, respectively). The number of residents in reduced-hours programs who reported committing at least 1 medical error within the past week remained high in both study years (32.9% in 2003 and 26.3% in 2004, P = .27). It is possible to reduce residents' hours without increasing patient load. Doing so may reduce the extent to which fatigue affects patient safety as perceived by these frontline providers.
Soriano, Jaymar; Kubo, Takatomi; Inoue, Takao; Kida, Hiroyuki; Yamakawa, Toshitaka; Suzuki, Michiyasu; Ikeda, Kazushi
2017-10-01
Experiments with drug-induced epilepsy in rat brains and epileptic human brain region reveal that focal cooling can suppress epileptic discharges without affecting the brain's normal neurological function. Findings suggest a viable treatment for intractable epilepsy cases via an implantable cooling device. However, precise mechanisms by which cooling suppresses epileptic discharges are still not clearly understood. Cooling experiments in vitro presented evidence of reduction in neurotransmitter release from presynaptic terminals and loss of dendritic spines at post-synaptic terminals offering a possible synaptic mechanism. We show that termination of epileptic discharges is possible by introducing a homogeneous temperature factor in a neural mass model which attenuates the post-synaptic impulse responses of the neuronal populations. This result however may be expected since such attenuation leads to reduced post-synaptic potential and when the effect on inhibitory interneurons is less than on excitatory interneurons, frequency of firing of pyramidal cells is consequently reduced. While this is observed in cooling experiments in vitro, experiments in vivo exhibit persistent discharges during cooling but suppressed in magnitude. This leads us to conjecture that reduction in the frequency of discharges may be compensated through intrinsic excitability mechanisms. Such compensatory mechanism is modelled using a reciprocal temperature factor in the firing response function in the neural mass model. We demonstrate that the complete model can reproduce attenuation of both magnitude and frequency of epileptic discharges during cooling. The compensatory mechanism suggests that cooling lowers the average and the variance of the distribution of threshold potential of firing across the population. Bifurcation study with respect to the temperature parameters of the model reveals how heterogeneous response of epileptic discharges to cooling (termination or suppression only) is exhibited. Possibility of differential temperature effects on post-synaptic potential generation of different populations is also explored.
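A toy illustration of the two temperature mechanisms described above, using a Jansen-Rit-style impulse response and firing sigmoid; the functional form of the temperature factor k and all parameter values are assumptions for illustration, not the authors' model.

```python
# k < 1 mimics cooling: it attenuates the post-synaptic impulse response,
# while 1/k in the sigmoid narrows the spread of firing thresholds.
import numpy as np

def psp_impulse(t, A=3.25, a=100.0, k=1.0):
    """Post-synaptic impulse response, attenuated by temperature factor k."""
    return k * A * a * t * np.exp(-a * t)

def firing_rate(v, vmax=5.0, v0=6.0, r=0.56, k=1.0):
    """Sigmoid response; r/k adjusts the effective threshold spread."""
    return vmax / (1.0 + np.exp((r / k) * (v0 - v)))

t = np.linspace(0.0, 0.1, 200)
for k in (1.0, 0.7):                  # normothermia vs. cooling
    print(f"k = {k}: peak PSP = {psp_impulse(t, k=k).max():.2f} mV, "
          f"rate at v0 + 2 mV = {firing_rate(8.0, k=k):.2f} / s")
```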
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier; for transcriptional autorepression, it is frequently negligible.
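A Gillespie-style sketch comparing an unregulated gene with transcriptional autorepression, via the variance per molecule of mean expression (the Fano factor); the rate constants and repression form are illustrative, and small RNA-mediated feedback is not modeled.

```python
# Time-averaged mean and Fano factor of a birth-death protein count.
import numpy as np

def gillespie_stats(prod, deg=1.0, t_end=2000.0, burn=200.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t = 50, 0.0
    w = m1 = m2 = 0.0
    while t < t_end:
        r_birth, r_death = prod(x), deg * x
        total = r_birth + r_death
        dt = rng.exponential(1.0 / total)
        if t > burn:                      # time-weighted moments after burn-in
            w += dt
            m1 += x * dt
            m2 += x * x * dt
        t += dt
        x += 1 if rng.uniform() * total < r_birth else -1
    mean = m1 / w
    fano = (m2 / w - mean**2) / mean
    return mean, fano

print(gillespie_stats(lambda x: 50.0))                     # unregulated: Fano ~ 1
print(gillespie_stats(lambda x: 50.0 / (1.0 + x / 20.0)))  # autorepressed: Fano < 1
```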
NASA Astrophysics Data System (ADS)
Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.
2016-10-01
The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers, do not take into account variance induced by non-uniform data density, and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPz). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and PYTHON implementations that are available to download at https://github.com/OxfordML/GPz.
Compression of Morbidity and Mortality: New Perspectives
Stallard, Eric
2017-01-01
Compression of morbidity is a reduction over time in the total lifetime days of chronic disability, reflecting a balance between (1) morbidity incidence rates and (2) case-continuance rates—generated by case-fatality and case-recovery rates. Chronic disability includes limitations in activities of daily living and cognitive impairment, which can be covered by long-term care insurance. Morbidity improvement can lead to a compression of morbidity if the reductions in age-specific prevalence rates are sufficiently large to overcome the increases in lifetime disability due to concurrent mortality improvements and progressively higher disability prevalence rates with increasing age. Compression of mortality is a reduction over time in the variance of age at death. Such reductions are generally accompanied by increases in the mean age at death; otherwise, for the variances to decrease, the death rates above the mean age at death would need to increase, and this has rarely been the case. Mortality improvement is a reduction over time in the age-specific death rates and a corresponding increase in the cumulative survival probabilities and age-specific residual life expectancies. Mortality improvement does not necessarily imply concurrent compression of mortality. This paper reviews these concepts, describes how they are related, shows how they apply to changes in mortality over the past century and to changes in morbidity over the past 30 years, and discusses their implications for future changes in the United States. The major findings of the empirical analyses are the substantial slowdowns in the degree of mortality compression over the past half century and the unexpectedly large degree of morbidity compression that occurred over the morbidity/disability study period 1984–2004; evidence from other published sources suggests that morbidity compression may be continuing. PMID:28740358
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity of fMRI and functional connectivity analyses. We show a dramatic improvement over current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of concurrently measured breathing and cardiac cycles. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise; however, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). Our hypothesis is that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors, and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
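For context, the traditional model the paper builds on can be sketched as low-order Fourier regressors locked to the measured physiologic phase and removed by linear regression; the adaptive fitting step proposed in the paper is not reproduced here, and all signals below are synthetic.

```python
# Remove cardiac-locked Fourier regressors from a toy BOLD time series.
import numpy as np

def physio_regressors(phase, order=2):
    """Columns cos(k*phase), sin(k*phase) for k = 1..order."""
    cols = [f(k * phase) for k in range(1, order + 1) for f in (np.cos, np.sin)]
    return np.column_stack(cols)

rng = np.random.default_rng(12)
n = 400
t = np.arange(n) * 2.0                       # TR = 2 s
phase = 2 * np.pi * (t % 0.9) / 0.9          # toy cardiac phase, ~67 bpm
signal = 0.5 * np.sin(2 * np.pi * t / 60.0)  # slow fluctuation of interest
bold = (signal + 0.3 * np.cos(phase) + 0.1 * np.sin(2 * phase)
        + rng.normal(0.0, 0.2, n))

X = np.column_stack([np.ones(n), physio_regressors(phase)])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ beta
print(f"variance removed: {1 - cleaned.var() / bold.var():.2f}")
```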
NASA Technical Reports Server (NTRS)
Hill, Emma M.; Ponte, Rui M.; Davis, James L.
2007-01-01
Comparison of monthly mean tide-gauge time series with corresponding model time series, based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM), reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with the IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance from scaling the OM, including seasonal terms, or both, indicates weakness in the model's prediction of sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan; the RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
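A minimal sketch of the per-station regression described above, on synthetic data: OM admittance plus annual and semi-annual terms, summarized by the variance-reduction statistic (the IB correction is assumed already removed).

```python
# Per-station regression: tide gauge on OM admittance + seasonal terms.
import numpy as np

rng = np.random.default_rng(13)
t = np.arange(240) / 12.0                          # 20 years, monthly, in years
om = rng.normal(0.0, 20.0, t.size)                 # model sea level, mm
tg = 0.8 * om + 15 * np.sin(2 * np.pi * t + 0.3) + rng.normal(0.0, 10.0, t.size)

X = np.column_stack([np.ones_like(t), om,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),   # annual
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])  # semi-annual
beta, *_ = np.linalg.lstsq(X, tg, rcond=None)
resid = tg - X @ beta
print(f"variance reduction: {1 - resid.var() / tg.var():.2f}")
```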
Predicting attitudes toward seeking professional psychological help among Alaska Natives.
Freitas-Murrell, Brittany; Swift, Joshua K
2015-01-01
This study sought to examine the role of current/previous treatment experience, stigma (social and self), and cultural identification (Caucasian and Alaska Native [AN]) in predicting attitudes toward psychological help seeking for ANs. Results indicated that these variables together explained roughly 56% of variance in attitudes. In particular, while self-stigma and identification with the Caucasian culture predicted a unique amount of variance in help-seeking attitudes, treatment use and identification with AN culture did not. The results of this study indicate that efforts to address the experience of self-stigma may prove most useful to improving help-seeking attitudes in ANs.
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of registration-processor accuracy, the variance of the registration error, was derived. Under the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal-to-noise ratio. The final expressions were presented to emphasize both the form and the simplicity of their representation. For the situation in which relative spatial distortions exist between the images to be registered, expressions were derived for estimating the loss in output signal-to-noise ratio due to these distortions. These results are expressed in terms of a reduction factor.
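In the notation of the abstract (our symbols: registration-error variance sigma_e^2, effective bandwidth B_e, signal-to-noise ratio SNR), the stated result is

```latex
\sigma_{\varepsilon}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}}
```

so doubling the effective bandwidth cuts the error variance by a factor of four at fixed SNR.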
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
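A sketch of the statistical factor-model covariance that this line of work starts from (Sigma = L L^T + Psi); the DVA correction itself is not part of scikit-learn, and the data here are stand-ins:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 50))        # placeholder for T x N asset returns

fa = FactorAnalysis(n_components=5).fit(returns)
cov_factor = fa.get_covariance()            # loadings @ loadings.T + diag(noise)
cov_sample = np.cov(returns, rowvar=False)  # the noisier sample estimate
```

The systematic spectral error the authors describe afflicts both estimates; DVA adjusts the variances along eigendirections to diminish it.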
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... (OMB) for review, as required by the Paperwork Reduction Act. The Department is soliciting public... resultant costs also serve to further stabilize the mortgage insurance premiums charged by FHA and the... Insurance Benefits, HUD-90035 Information/Disclosure, HUD-90041 Request for Variance, Pre-foreclosure sale...
Decomposition of Some Well-Known Variance Reduction Techniques. Revision.
1985-05-01
"use a family of transformations to convert given samples into samples conditioned on a given characteristic (p. 4)." Dub and Horowitz (1979), Granovsky ... "Antithetic Variates Revisited," Commun. ACM 26, 11, 964-971. Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte Carlo," SIAM J. Alg
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available and is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, the efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron-tracking algorithm, which may also affect the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
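The speed/accuracy trade-off tuned here is conventionally scored with the figure of merit; a generic sketch, not PENEPMA's internal bookkeeping:

```python
def figure_of_merit(relative_error, cpu_time_s):
    """FOM = 1 / (R^2 * T). For an unbiased estimator, R^2 * T stays roughly
    constant as a run lengthens, so a higher FOM means genuinely better
    efficiency rather than just a longer run."""
    return 1.0 / (relative_error**2 * cpu_time_s)
```

Variance-reduction parameters that raise the FOM without biasing the tallies are the ones worth keeping.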
Aesthetic judgement of orientation in modern art.
Mather, George
2012-01-01
When creating an artwork, the artist makes a decision regarding the orientation at which the work is to be hung based on their aesthetic judgement and the message conveyed by the piece. Is the impact or aesthetic appeal of a work diminished when it is hung at an incorrect orientation? To investigate this question, Experiment 1 asked whether naïve observers can appreciate the correct orientation (as defined by the artist) of 40 modern artworks, some of which are entirely abstract. Eighteen participants were shown 40 paintings in a series of trials. Each trial presented all four cardinal orientations on a computer screen, and the participant was asked to select the orientation that was most attractive or meaningful. Results showed that the correct orientation was selected in 48% of trials on average, significantly above the 25% chance level, but well below perfect performance. A second experiment investigated the extent to which the 40 paintings contained recognisable content, which may have mediated orientation judgements. Recognition rates varied from 0% for seven of the paintings to 100% for five paintings. Orientation judgements in Experiment 1 correlated significantly with "meaningful" content judgements in Experiment 2: 42% of the variance in orientation judgements in Experiment 1 was shared with recognition of meaningful content in Experiment 2. For the seven paintings in which no meaningful content at all was detected, 41% of the variance in orientation judgements was shared with variance in a physical measure of image content, Fourier amplitude spectrum slope. For some paintings, orientation judgements were quite consistent, despite a lack of meaningful content. The origin of these orientation judgements remains to be identified.
Solomon, Joshua A.
2007-01-01
To explain the relationship between first- and second-response accuracies in a detection experiment, Swets, Tanner, and Birdsall [Swets, J., Tanner, W. P., Jr., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301–340] proposed that the variance of visual signals increased with their means. However, both a low threshold and intrinsic uncertainty produce similar relationships. I measured the relationship between first- and second-response accuracies for suprathreshold contrast discrimination, which is thought to be unaffected by sensory thresholds and intrinsic uncertainty. The results are consistent with a slowly increasing variance. PMID:17961625
New graduate nurse transition programs and clinical leadership skills in novice RNs.
Chappell, Kathy B; Richards, Kathy C; Barnett, Scott D
2014-12-01
The objective of this study was to determine predictors of clinical leadership skill (CLS) for RNs with 24 months of clinical experience or less. New graduate nurse transition programs (NGNTPs) have been proposed as a strategy to increase CLS, and CLS is associated with positive patient outcomes. Hierarchical regression modeling was used to evaluate predictors of CLS among individual characteristics of RNs and characteristics of NGNTPs. Perceived overall quality of an NGNTP was the strongest predictor of CLS (R = 0.041, P < .01). Clinical experience and NGNTP characteristics accounted for 6.9% of the variance in CLS, and for 12.6% of the variance among RNs with assigned mentors (P < .01). RNs participating in NGNTPs for more than 24 weeks were 21 times more likely to remain employed within the organization than those in NGNTPs of 12 weeks or less, a significant cost-benefit to the organization. Although perceived overall quality of an NGNTP was the strongest predictor of CLS, much of the variance in CLS remains unexplained.
Fish play Minority Game as humans do
NASA Astrophysics Data System (ADS)
Liu, Ruey-Tarng; Chung, Fei Fang; Liaw, Sy-Sang
2012-01-01
We report the results of an unprecedented real Minority Game (MG) played by university staff members who clicked one of two identical buttons (A and B) on a computer screen while clocking in or out of work. We recorded the number of people who clicked button A for 1288 games, beginning on April 21, 2008 and ending on October 31, 2010, and calculated the variance of the number of people who clicked A as a function of time. The evolution of the variance shows that the global gain of the selfish agents increases when a small portion of agents make persistent choices in the games. We also carried out another experiment in which we forced 101 fish to enter one of two symmetric chambers (A and B). We repeated the fish experiment 500 times and found that the variance of the number of fish that entered chamber A evolved in a way similar to the human MG, suggesting that fish have memory and can employ more strategies when facing the same situation again and again.
FW/CADIS-Ω: An Angle-Informed Hybrid Method for Neutron Transport
NASA Astrophysics Data System (ADS)
Munk, Madicken
The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model such systems will persist. For these types of systems, hybrid methods are often the best choice for obtaining a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and so reduce the variance of the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard for modeling systems with deeply penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems with strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo, thereby leveraging both adjoint and contributon theory for variance reduction. They have been named CADIS-Ω and FW-CADIS-Ω. To characterize CADIS-Ω, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics for quantifying flux anisotropy are used to characterize the methods beyond the standard figure of merit (FOM) and relative-error metrics. As a result, a more thorough investigation into the effects of anisotropy, and of the degree of anisotropy, on Monte Carlo convergence is possible. The results of the characterization show that CADIS-Ω performs best in strongly anisotropic problems with preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-Ω is less sensitive to discretization than CADIS for both quadrature order and PN order, although more variation in the results was observed in response to changing quadrature order than PN order. In addition, as a result of the forward normalization in the Ω-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of CADIS-Ω in this dissertation outlines a path forward for further hybrid-methods development. In particular, the method's response to changes in quadrature order and PN order, and its ray-effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further characterization, the full potential of the Ω-methods can be realized. The methods can then be applied to geometrically complex, materially diverse problems and help advance system modeling in deep-penetration radiation transport problems with strong anisotropies in the flux.
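For reference, the baseline CADIS prescription that the Ω-methods extend derives a biased source and consistent weight windows from an adjoint solution. A minimal 1-D sketch; the exponential adjoint profile is purely illustrative:

```python
import numpy as np

def cadis_parameters(q, phi_adj):
    """CADIS: biased source q*phi+/R and weight-window centers R/phi+,
    where R = <q, phi+> estimates the detector response."""
    R = np.sum(q * phi_adj)
    q_biased = q * phi_adj / R
    ww_centers = np.where(phi_adj > 0, R / phi_adj, np.inf)
    return q_biased, ww_centers, R

q = np.zeros(100); q[0] = 1.0                   # source at one end of the mesh
phi_adj = np.exp(np.linspace(-8.0, 0.0, 100))   # assumed adjoint: detector at far end
q_b, ww, R = cadis_parameters(q, phi_adj)
```

CADIS-Ω keeps this machinery but replaces the adjoint scalar flux with the forward-weighted adjoint angular quantity described above.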
Robust prediction of protein subcellular localization combining PCA and WSVMs.
Tian, Jiang; Gu, Hong; Liu, Wenqi; Gao, Chiyang
2011-08-01
Automated prediction of protein subcellular localization is an important tool for genome annotation and drug discovery, and Support Vector Machines (SVMs) can effectively solve this problem in a supervised manner. However, datasets obtained from real experiments are likely to contain outliers or noise, which can lead to poor generalization ability and classification accuracy. To address this problem, we adopt strategies to lower the effect of outliers. First, we design a method based on Weighted SVMs (WSVMs): different weights are assigned to different data points, so the training algorithm learns the decision boundary according to the relative importance of the data points. Second, we analyse the influence of Principal Component Analysis (PCA) on WSVM classification and propose a hybrid classifier combining the merits of both PCA and WSVM. After dimension reduction is performed on the datasets, a kernel-based possibilistic c-means algorithm can generate more suitable weights for training, since PCA transforms the data into a new coordinate system whose largest-variance directions are strongly affected by the outliers. Experiments on benchmark datasets show promising results, which confirms the effectiveness of the proposed method in terms of prediction accuracy. Copyright © 2011 Elsevier Ltd. All rights reserved.
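A sketch of the PCA-then-weighted-SVM pipeline; the density-based weights below are a simple stand-in for the paper's kernel-based possibilistic c-means memberships:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

Z = PCA(n_components=10).fit_transform(X)          # dimension reduction first
dist = np.linalg.norm(Z - Z.mean(axis=0), axis=1)  # outliers lie far from the bulk
weights = 1.0 / (1.0 + dist)                       # downweight probable outliers

clf = SVC(kernel="rbf").fit(Z, y, sample_weight=weights)
```

The point of the weighting is that a far-lying, low-weight point pulls the decision boundary much less than it would in a standard SVM.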
Perceived self-competence and relationship experiences in inpatient psychotherapy - a pilot study
Sammet, Isa; Häfner, Steffen; Leibing, Eric; Lüneburg, Tim; Schauenburg, Henning
2007-01-01
Objective: The patient’s sense of capability in mastering future challenges (“self-competence”) represents an important therapeutic target. To date, empirical findings concerning the influence of the therapeutic relationship on perceived self-competence remain scarce. Against this backdrop, mutual associations between perceived self-competence, symptom distress and various relationship experiences within inpatient psychotherapy are investigated. Methods: 219 inpatients with heterogeneous diagnoses completed the SCL-90-R, the Relationship Questionnaire RQ1 and the Inventory of Interpersonal Problems IIP prior to therapy. Self-competence and relationships to the individual therapist, therapeutic team and fellow patients were assessed weekly using an inpatient questionnaire (SEB). Results: As expected, there were significant negative correlations between self-competence and symptom distress. Patients with more “fearfully avoidant” behavior upon admission experienced relationships during therapy as significantly more negative. Conversely, the quality of relationships to the individual therapist and fellow patients was predictive of a significant part of variance in self-competence upon discharge. Conclusions: A model of mutual interactions is proposed for the variables under investigation. Results suggest that the positive association between the therapeutic relationship and symptom reduction could partly be explained by an improvement in perceived self-competence. PMID:19742295
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Risk factors of chronic periodontitis on healing response: a multilevel modelling analysis.
Song, J; Zhao, H; Pan, C; Li, C; Liu, J; Pan, Y
2017-09-15
Chronic periodontitis is a multifactorial, polygenic disease with an increasing number of associated factors identified over recent decades. Longitudinal epidemiologic studies have demonstrated that these risk factors are related to the progression of the disease. Traditionally, a multivariate regression model is used to find risk factors associated with chronic periodontitis; however, standard statistical procedures require that observations be independent. Multilevel modelling (MLM) has been widely used in recent years: it respects the hierarchical structure of the data, decomposes the error terms into different levels, and provides a new analytic method and framework for this problem. The purpose of our study is to investigate the relationship between clinical periodontal indices and risk factors in chronic periodontitis through MLM analysis and to identify high-risk individuals in the clinical setting. Fifty-four patients with moderate to severe periodontitis were included. They were treated by means of non-surgical periodontal therapy and then made regular follow-up visits at 3, 6, and 12 months after therapy. Each patient answered a questionnaire survey and underwent measurement of clinical periodontal parameters. Compared with baseline, probing depth (PD) and clinical attachment loss (CAL) improved significantly after non-surgical periodontal therapy with regular follow-up visits at 3, 6, and 12 months. The null model and variance-components models, with no independent variables included, were fitted first to investigate the variance of the PD and CAL reductions across all three levels; they showed a statistically significant difference (P < 0.001), establishing that MLM analysis was necessary. Site-level variables had effects on PD and CAL reduction, explaining 77-78% of PD reduction and 70-80% of CAL reduction at 3, 6, and 12 months; the other levels explained only 20-30% of the PD and CAL reductions. Site level thus had the greatest effect on PD and CAL reduction. Non-surgical periodontal therapy with regular follow-up visits had a remarkable curative effect. All three levels had a substantial influence on the reduction of PD and CAL, with site level having the largest effect.
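A sketch of the null (variance-components) step described above, collapsed to two levels for brevity (the study's three-level structure needs nested random effects); the data frame `df`, with columns `pd_reduction` and `patient`, is an assumption:

```python
import statsmodels.formula.api as smf

# Intercept-only mixed model: total variance splits into a between-patient
# component and a residual (within-patient, i.e. site-level) component.
null_fit = smf.mixedlm("pd_reduction ~ 1", df, groups=df["patient"]).fit()

var_between = null_fit.cov_re.iloc[0, 0]
var_within = null_fit.scale
icc = var_between / (var_between + var_within)  # share of variance at patient level
```

A non-trivial variance share at any level is the usual justification for moving from ordinary regression to MLM, as the authors argue.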
Zendjidjian, X Y; Auquier, P; Lançon, C; Loundou, A; Parola, N; Faugère, M; Boyer, L
2015-01-01
The aim of our study was to develop a specific French self-administered instrument for measuring hospitalized patients' satisfaction in psychiatry based exclusively on the patient point of view: the SATISPSY-22. The development of the SATISPSY was undertaken in three steps: item generation, item reduction, and validation. The content of the SATISPSY was derived from 80 interviews with patients hospitalized in psychiatry. Using item response and classical test theories, item reduction was performed in 2 hospitals on 270 responders. The validation was based on construct validity, reliability, and some aspects of external validity. The SATISPSY contains 22 items describing 6 dimensions (staff, quality of care, personal experience, information, activity, and food). The six-factor structure accounted for 78.0% of the total variance. Each item achieved the 0.40 standard for item-internal consistency, and the Cronbach's alpha coefficients were > 0.70. Dimension scores were strongly positively correlated with Visual Analogue Scale scores. Significant associations with socioeconomic and clinical indicators showed good discriminant and external validity. INFIT statistics ranged from 0.71 to 1.25. The SATISPSY-22 presents satisfactory psychometric properties, enabling patient feedback to be incorporated into a continuous quality-of-care improvement strategy. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
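The internal-consistency criterion quoted above (alpha > 0.70 per dimension) is the standard Cronbach computation; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of one dimension's scored items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_vars / total_var)
```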
Health status, job stress and work-related injury among Los Angeles taxi drivers.
Wang, Pin-Chieh; Delp, Linda
2014-01-01
Taxi drivers work long hours for low wages and report hypertension, weight gain, and musculoskeletal pain associated with the sedentary nature of their job, stressful working conditions, and poor dietary habits. They also experience a high work-related fatality rate. The objective of this study is to examine the association of taxi drivers' health status and level of job stress with work-related injury and to determine whether an interaction exists. A survey of 309 Los Angeles taxi drivers provides basic data on health status, job stress, and work-related injuries. We further analyzed the data using a modified Poisson regression approach with a robust error variance to estimate the relative risk (RR) and 95% confidence intervals (CI) of work-related injuries. Focus group results supplemented and helped interpret the quantitative data. The joint effect of good health and low job stress was associated with a large reduction in the incidence of injuries, consistent with the hypothesis that health status and stress level each modify the other's effect on the risk of work-related injury. These results suggest that combining stress reduction and health management programs with changes in the stressful conditions of the job may provide targeted avenues for injury prevention.
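A sketch of the modified Poisson approach named above: a Poisson GLM on the binary injury outcome with a robust (sandwich) covariance, so exponentiated coefficients estimate relative risks. The design matrix `X` (with a constant) and 0/1 outcome `injured` are assumptions:

```python
import numpy as np
import statsmodels.api as sm

fit = sm.GLM(injured, X, family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(fit.params)         # relative risks
rr_ci = np.exp(fit.conf_int())  # 95% CIs on the RR scale
```

The robust variance is what keeps the standard errors honest when the Poisson model is applied to a binary rather than a count outcome.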
Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.
Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. Finally, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.
Improved Hybrid Modeling of Spent Fuel Storage Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bibber, Karl van
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable, from a computational time and resource perspective, to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy for lower computational effort in many problems of energy, security, and economic importance.
Okada, Kensuke; Hoshino, Takahiro
2017-04-01
In psychology, the reporting of variance-accounted-for effect size indices has been recommended and widely accepted with the movement away from null-hypothesis significance testing. However, most researchers have paid insufficient attention to the fact that effect sizes depend on the choice of the number of factor levels and their ranges in experiments. Moreover, the functional form of how, and by how much, this choice affects the resultant effect size has not thus far been studied. We show that the relationship between the population effect size and the number and range of levels is given by an explicit function under reasonable assumptions. Counterintuitively, researchers may double or halve the resultant effect size simply by suitably choosing the number of levels and their ranges. Through a simulation study, we confirm that this relation also applies to sample effect size indices in much the same way. Therefore, variance-accounted-for effect sizes are substantially affected by basic research design choices such as the number of levels. Simple cross-study comparisons and meta-analyses of variance-accounted-for effect sizes are generally irrational unless differences in research designs are explicitly considered.
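One simple case makes the dependence explicit. Suppose a linear effect mu(x) = beta*x is observed with error variance sigma^2 at experimenter-chosen levels x_1, ..., x_k; the population variance-accounted-for index is then

```latex
\eta^{2} \;=\; \frac{\beta^{2}\,\operatorname{Var}(x)}{\beta^{2}\,\operatorname{Var}(x) + \sigma^{2}},
```

where Var(x) is the variance of the chosen level values. Widening the level range, or concentrating levels at the extremes, inflates eta^2 with no change in the underlying effect beta. (This illustration is ours; the paper derives the general functional form.)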
Gerster, Samuel; Namer, Barbara; Elam, Mikael
2017-01-01
Skin conductance responses (SCR) are increasingly analyzed with model‐based approaches that assume a linear and time‐invariant (LTI) mapping from sudomotor nerve (SN) activity to observed SCR. These LTI assumptions have previously been validated indirectly, by quantifying how much variance in SCR elicited by sensory stimulation is explained under an LTI model. This approach, however, collapses sources of variability in the nervous and effector organ systems. Here, we directly focus on the SN/SCR mapping by harnessing two invasive methods. In an intraneural recording experiment, we simultaneously track SN activity and SCR. This allows assessing the SN/SCR relationship but possibly suffers from interfering activity of non‐SN sympathetic fibers. In an intraneural stimulation experiment under regional anesthesia, such influences are removed. In this stimulation experiment, about 95% of SCR variance is explained under LTI assumptions when stimulation frequency is below 0.6 Hz. At higher frequencies, nonlinearities occur. In the intraneural recording experiment, explained SCR variance is lower, possibly indicating interference from non‐SN fibers, but higher than in our previous indirect tests. We conclude that LTI systems may not only be a useful approximation but in fact a rather accurate description of biophysical reality in the SN/SCR system, under conditions of low baseline activity and sporadic external stimuli. Intraneural stimulation under regional anesthesia is the most sensitive method to address this question. PMID:28862764
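A sketch of the LTI forward model being tested: a sudomotor spike train convolved with a fixed impulse response. The biexponential kernel and its time constants are illustrative assumptions, not the authors' estimates:

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 60.0, dt)
kernel = np.exp(-t / 4.0) - np.exp(-t / 0.75)   # canonical-looking SCR shape
kernel /= kernel.max()

spikes = np.zeros_like(t)                        # SN burst times
spikes[np.random.default_rng(1).integers(0, t.size, 8)] = 1.0

scr_pred = np.convolve(spikes, kernel)[: t.size] * dt   # LTI prediction
```

Explained variance under the LTI assumption is then 1 - var(measured - fitted)/var(measured) after scaling `scr_pred` to a measured trace.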
Vicarious resilience in sexual assault and domestic violence advocates.
Frey, Lisa L; Beesley, Denise; Abbott, Deah; Kendrick, Elizabeth
2017-01-01
There is little research related to sexual assault and domestic violence advocates' experiences, with the bulk of the literature focused on stressors and systemic barriers that negatively impact efforts to assist survivors. However, advocates participating in these studies have also emphasized the positive impact they experience consequent to their work. This study explores the positive impact. Vicarious resilience, personal trauma experiences, peer relational quality, and perceived organizational support in advocates (n = 222) are examined. Also, overlap among the conceptual components of vicarious resilience is explored. The first set of multiple regressions showed that personal trauma experiences and peer relational health predicted compassion satisfaction and vicarious posttraumatic growth, with organizational support predicting only compassion satisfaction. The second set of multiple regressions showed that (a) there was significant shared variance between vicarious posttraumatic growth and compassion satisfaction; (b) after accounting for vicarious posttraumatic growth, organizational support accounted for significant variance in compassion satisfaction; and (c) after accounting for compassion satisfaction, peer relational health accounted for significant variance in vicarious posttraumatic growth. Results suggest that it may be more meaningful to conceptualize advocates' personal growth related to their work through the lens of a multidimensional construct such as vicarious resilience. Organizational strategies promoting vicarious resilience (e.g., shared organizational power, training components) are offered, and the value to trauma-informed care of fostering advocates' vicarious resilience is discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Volz, Magdalena S; Farmer, Annabelle; Siegmund, Britta
2016-02-01
Inflammatory bowel disease (IBD) is frequently associated with chronic abdominal pain (CAP). Transcranial direct current stimulation (tDCS) has been proven to reduce chronic pain. This study aimed to investigate the effects of tDCS in patients with CAP due to IBD. This randomized, sham-controlled, double-blind, parallel-design study included 20 patients with either Crohn disease or ulcerative colitis with CAP (≥3/10 on the visual analog scale (VAS) in 3/6 months). Anodal or sham tDCS was applied over the primary motor cortex for 5 consecutive days (2 mA, 20 minutes). Assessments included VAS, pressure pain threshold, inflammatory markers, and questionnaires on quality of life, functional and disease-specific symptoms (Irritable Bowel Syndrome-Severity Scoring System [IBS-SSS]), disease activity, and pain catastrophizing. Follow-up data were collected 1 week after the end of the stimulation. Statistical analyses were performed using analysis of variance and t tests. There was a significant reduction of abdominal pain in the anodal tDCS group compared with sham tDCS. This effect was evident in changes in VAS and pressure pain threshold on the left and right sides of the abdomen. In addition, 1 week after stimulation, pain remained significantly reduced on the right side of the abdomen. There was also a significant reduction in scores on pain catastrophizing and on the IBS-SSS when comparing both groups. Inflammatory markers and disease activity did not differ significantly between groups throughout the experiment. Transcranial direct current stimulation proved to be an effective and clinically relevant therapeutic strategy for CAP in IBD. The analgesic effects observed are unrelated to inflammation and disease activity, which emphasizes central pain mechanisms in CAP.
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Gao, Weimin; Francis, Arokiasamy J.
2013-01-01
Previously, it was shown not only that uranium reduction under fermentation conditions is common among clostridia species, but also that strains differ in the extent of their capability and that the culture pH significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized, and their effects on the variance in uranium-reduction capability are discussed. The relationship between hydrogen metabolism and uranium reduction was then further explored, and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia, in contrast to results with nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not bring about uranium(VI) reduction. In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. Finally, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to overall metabolism, especially hydrogen (H2) production.
Comparing transformation methods for DNA microarray data
Thygesen, Helene H; Zwinderman, Aeilko H
2004-01-01
Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformations issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953
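A sketch of the F-like quality measure, assuming per-gene replicate structure in an array `expr` of shape (n_biological_samples, n_replicates); the paper's exact pooling may differ:

```python
import numpy as np

def variance_ratio(expr):
    """Between-sample (biological) over within-sample (measurement) variance."""
    within = expr.var(axis=1, ddof=1).mean()
    between = expr.mean(axis=1).var(ddof=1)
    return between / within
```

Transformation parameters (Box-Cox exponent, baseline shift, and so on) are then chosen to maximize this ratio averaged over genes.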
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of that model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
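The interpretability gain comes from exact zeros in the loadings; a sketch with scikit-learn's SparsePCA on placeholder data:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 30))      # placeholder for model-run snapshots

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(snapshots)
zero_frac = np.mean(spca.components_ == 0)  # many exact zeros in the loadings
```

Each retained component then involves only a handful of the original inputs and states, which is what permits the ex-post physical reading of the emulator.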
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
In functional genomics studies, tests of mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is typically neglected or calibrated away as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; variance heterogeneity induced by condition change may reflect another. A change in condition may alter both the mean and higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existing mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT); the likelihood ratio test (LRT), by contrast, severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate the tremendous potential gain of integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed integrative test better summarizes the impacts of condition change on the expression distributions of susceptible genes than do the existing competitors. Therefore, particular attention should be paid to explicitly exploiting the variance heterogeneity induced by condition change in functional genomics analysis.
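A sketch in the spirit of the IMVT (not the authors' exact statistic): a mean test and a variance test, which the paper shows to be independent under the null, pooled with Fisher's method:

```python
import numpy as np
from scipy import stats

def combined_mean_variance_test(x, y):
    """Welch t for means, Levene for variances, Fisher combination of p-values."""
    p_mean = stats.ttest_ind(x, y, equal_var=False).pvalue
    p_var = stats.levene(x, y).pvalue
    fisher_stat = -2.0 * (np.log(p_mean) + np.log(p_var))
    return stats.chi2.sf(fisher_stat, df=4)   # valid given null independence
```

Genes whose expression shifts in mean, in variance, or in both then surface through a single combined p-value.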
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
Money Demand and Risk: A Classroom Experiment
ERIC Educational Resources Information Center
Ewing, Bradley T.; Kruse, Jamie B.; Thompson, Mark A.
2004-01-01
The authors describe a classroom experiment that motivates student understanding of behavior toward risk and its effect on money demand. In this experiment, students are endowed with an income stream that they can allocate between a risk-free fund and a risky fund. Changes in volatility are represented by mean-preserving changes in the variance of…
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.
2013-01-01
Background: Cluster-randomized experiments that assign intact groups such as schools or school districts to treatment conditions are increasingly common in educational research. Such experiments are inherently multilevel designs whose sensitivity (statistical power and precision of estimates) depends on the variance decomposition across levels.…
Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?
ERIC Educational Resources Information Center
Simmons, Chris
2014-01-01
This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
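A sketch of the standard two-sample power calculation discussed here, using statsmodels rather than SAS; the effect size and targets are illustrative:

```python
from statsmodels.stats.power import TTestIndPower

# Animals per group for a two-sided two-sample t-test detecting a
# standardized difference of 0.5 SD with 80% power at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative='two-sided')
```

Whichever argument is left unspecified is the one solved for, so the same call can instead return the achievable power for a fixed, practically constrained group size.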
Cultural and temperamental variation in emotional response.
Tsai, Jeanne L; Levenson, Robert W; McCoy, Kimberly
2006-08-01
To examine the relative influence of cultural and temperamental factors on emotional response, we compared the emotional behavior, reports of emotional experience, and autonomic responses of 50 European American (EA) and 48 Chinese American (CA) college-age dating couples during conversations about conflicts in their relationships. EA couples showed more positive and less negative emotional behavior than did CA couples, despite similarities in reports of emotional experience and autonomic reactivity. Group differences in emotional behavior were mediated by cultural (values and practices) but not temperamental factors (neuroticism and extraversion). Collapsing across groups, cultural factors accounted for greater variance in emotional behavior but lesser variance in reports of emotional experience compared with temperamental factors. Together, these findings suggest that the relative influence of cultural and temperamental factors on emotion varies by response component. (c) 2006 APA, all rights reserved
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
The utility of the cropland data layer for Forest Inventory and Analysis
Greg C. Liknes; Mark D. Nelson; Dale D. Gormanson; Mark Hansen
2009-01-01
The Forest Service, U.S. Department of Agriculture's (USDA's) Northern Research Station Forest Inventory and Analysis program (NRS-FIA) uses digital land cover products derived from remotely sensed imagery, such as the National Land Cover Dataset (NLCD), for the purpose of variance reduction via postsampling stratification. The update cycle of the NLCD...
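A sketch of the post-stratified estimator that motivates this use of land cover data; stratum weights are the known map proportions, and the secondary correction term for random within-stratum sample sizes is omitted:

```python
import numpy as np

def poststratified_mean(y, strata, weights):
    """y: plot values; strata: stratum label per plot; weights: {label: W_h}."""
    mean = sum(W * y[strata == h].mean() for h, W in weights.items())
    var = sum(W**2 * y[strata == h].var(ddof=1) / (strata == h).sum()
              for h, W in weights.items())
    return mean, var
```

Because within-stratum variances of forest attributes are smaller than the population variance, the stratified variance falls below the simple-random-sampling variance, which is the variance reduction the program seeks.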
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
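As a rough illustration of the optimization described above, here is a hedged numerical sketch. It assumes a shot-noise-style variance model var_i ∝ I_i/t_i with equal weights (an assumption for illustration, not the paper's exact noise model); under that model the Lagrange-multiplier optimum is t_i ∝ sqrt(I_i), which the numerical solution reproduces.

```python
# Hedged sketch: allocate a fixed total integration time T across four
# intensity measurements to minimize the summed estimator variance,
# assuming var_i ∝ I_i / t_i (illustrative noise model).
import numpy as np
from scipy.optimize import minimize

I = np.array([1.0, 0.6, 0.3, 0.9])   # hypothetical mean intensities
T = 1.0                              # fixed total integration time

def total_variance(t):
    return np.sum(I / t)             # sum of per-channel variances

res = minimize(total_variance, x0=np.full(4, T / 4),
               constraints=[{'type': 'eq', 'fun': lambda t: t.sum() - T}],
               bounds=[(1e-6, None)] * 4)

t_analytic = T * np.sqrt(I) / np.sqrt(I).sum()   # Lagrange-multiplier optimum
print(res.x, t_analytic)                         # the two allocations agree
```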
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
Hu, Jianhua; Wright, Fred A
2007-03-01
The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
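A hedged sketch of the shrinkage idea described above, not the authors' exact model: per-gene variance estimates are pulled toward a pooled value before forming the test statistic, which guards against inappropriately small denominators. The prior degrees of freedom d0 and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_arrays = 1000, 3
x = rng.normal(0.0, 1.0, size=(n_genes, n_arrays))   # group 1 expression
y = rng.normal(0.2, 1.0, size=(n_genes, n_arrays))   # group 2 expression

s2 = (x.var(axis=1, ddof=1) + y.var(axis=1, ddof=1)) / 2   # per-gene variance
s2_pooled = s2.mean()                                      # shared prior value
d0, d = 4.0, 2 * (n_arrays - 1)                            # prior / residual df

# Shrunken variance: weighted combination of gene-specific and pooled values.
s2_tilde = (d0 * s2_pooled + d * s2) / (d0 + d)

se = np.sqrt(s2_tilde * (2.0 / n_arrays))
t_moderated = (y.mean(axis=1) - x.mean(axis=1)) / se
print(t_moderated[:5])
```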
Genetic and environmental variance in content dimensions of the MMPI.
Rose, R J
1988-08-01
To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than in those from twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.
NASA Astrophysics Data System (ADS)
Stanaway, D. J.; Flores, A. N.; Haggerty, R.; Benner, S. G.; Feris, K. P.
2011-12-01
Concurrent assessment of biogeochemical and solute transport data (i.e. advection, dispersion, transient storage) within lotic systems remains a challenge in eco-hydrological research. Recently, the Resazurin-Resorufin Smart Tracer System (RRST) was proposed as a mechanism to measure microbial activity at the sediment-water interface [Haggerty et al., 2008, 2009], associating metabolic and hydrologic processes and allowing for the reach-scale extrapolation of biotic function in the context of a dynamic physical environment. This study presents a Markov Chain Monte Carlo (MCMC) data assimilation technique to solve the inverse model of the Raz-Rru Advection Dispersion Equation (RRADE). The RRADE is a suite of dependent 1-D reactive ADEs, associated through the microbially mediated reduction of Raz to Rru (k12). This reduction is proportional to DO consumption (R^2 = 0.928). MCMC is a suite of algorithms that solve Bayes' theorem to condition uncertain model states and parameters on imperfect observations. Here, the RRST is employed to quantify the effect of chronic metal exposure on hyporheic microbial metabolism along a 100+ year old metal contamination gradient in the Clark Fork River (CF). We hypothesized that 1) the energetic cost of metal tolerance limits heterotrophic microbial respiration in communities evolved in chronically metal-contaminated environments, with respiration inhibition directly correlated to degree of contamination (observational experiment), and 2) when experiencing acute metal stress, respiration rate inhibition of metal-tolerant communities is less than that of naïve communities (manipulative experiment). To test these hypotheses, 4 replicate columns containing sediment collected from differently contaminated CF reaches and reference sites were fed a solution of RRST, NaCl, and cadmium (manipulative experiment only) within 24 hrs post collection. Column effluent was collected and measured for Raz, Rru, and EC to determine the Raz and Rru breakthrough curves (BTC), subsequently modeled by the RRADE and thereby allowing derivation of in situ rates of metabolism. RRADE parameter values are estimated through Metropolis-Hastings MCMC optimization. Unknown prior parameter distributions (PD) were constrained via a sensitivity analysis, except for the empirically estimated velocity. MCMC simulations were initiated at random points within the PD. Convergence of target distributions (TD) is achieved when the variance of the mode values of the six RRADE parameters across independent model replications falls below 10^{-3} of the mode value. Convergence of k12, the parameter of interest, was more resolved, with the modal variance of replicate simulations ranging from 10^{-4} of the modal value down to 0. The MCMC algorithm presented here offers a robust approach to solve the inverse RRST model and could be easily adapted to other inverse problems.
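A generic Metropolis-Hastings sketch of the kind of sampler described above; the toy log-posterior, step size, and starting value are illustrative stand-ins for the six-parameter RRADE inverse problem.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=50_000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# Toy target: Gaussian posterior (mean 1.2, variance 0.05) standing in
# for a k12-like parameter.
chain = metropolis_hastings(lambda th: -0.5 * np.sum((th - 1.2) ** 2 / 0.05),
                            theta0=[0.5])
print(chain[10_000:].mean(), chain[10_000:].var())   # after burn-in
```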
Southwestern USA Drought over Multiple Millennia
NASA Astrophysics Data System (ADS)
Salzer, M. W.; Kipfmueller, K. F.
2014-12-01
Severe to extreme drought conditions currently exist across much of the American West. There is increasing concern that climate change may be worsening droughts in the West and particularly the Southwest. Thus, it is important to understand the role of natural variability and to place current conditions in a long-term context. We present a tree-ring derived reconstruction of regional-scale precipitation for the Southwestern USA over several millennia. A network of 48 tree-ring chronologies from California, Nevada, Utah, Arizona, New Mexico, and Colorado was used. All of the chronologies are at least 1,000 years long. The network was subjected to data reduction through PCA and a "nested" multiple linear regression reconstruction approach. The regression model was able to capture 72% of the variance in September-August precipitation over the last 1,000 years and 53% of the variance over the first millennium of the Common Era. Variance captured and spatial coverage further declined back in time as the shorter chronologies dropped out of the model, eventually reaching 24% of variance captured at 3250 BC. Results show regional droughts on decadal- to multi-decadal scales have been prominent and persistent phenomena in the region over the last several millennia. Anthropogenic warming is likely to exacerbate the effects of future droughts on human and other biotic populations.
The magnitude and colour of noise in genetic negative feedback systems
Voliotis, Margaritis; Bowsher, Clive G.
2012-01-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction
NASA Astrophysics Data System (ADS)
Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.
2017-04-01
Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on the aforementioned two parameters. This yielded mathematical solutions that can be used by FeCr producers to select reductants more optimally in future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors for predicting variance in pre-reduction and compressive strength, respectively. The role of H within this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since they would maximize the H content that enhances pre-reduction.
Improvements in Neck and Arm Pain Following an Anterior Cervical Discectomy and Fusion.
Massel, Dustin H; Mayo, Benjamin C; Bohl, Daniel D; Narain, Ankur S; Hijji, Fady Y; Fineberg, Steven J; Louie, Philip K; Basques, Bryce A; Long, William W; Modi, Krishna D; Singh, Kern
2017-07-15
A retrospective analysis. The aim of this study was to quantify improvements in Visual Analogue Scale (VAS) neck and arm pain, Neck Disability Index (NDI), and Short Form-12 (SF-12) Mental (MCS) and Physical (PCS) Composite scores following an anterior cervical discectomy and fusion (ACDF). ACDF is evaluated with patient-reported outcomes. However, the extent to which these outcomes improve following ACDF remains poorly defined. A surgical registry of patients who underwent primary, one- or two-level ACDF during 2013 to 2015 was reviewed. Comparisons of VAS neck and arm, NDI, and SF-12 MCS and PCS scores were performed using paired t tests from preoperative to each postoperative time point. Analysis of variance (ANOVA) was used to estimate the reduction in neck and arm pain over the first postoperative year. Subgroup analyses were performed for patients with predominant neck (pNP) or arm (pAP) pain, as well as for one- versus two-level ACDF. Eighty-nine patients were identified. VAS neck and arm, NDI, and SF-12 PCS improved from preoperative scores at all postoperative time points (P < 0.05 for each). Across the first postoperative year, patients reported a 2.7-point (44.2%) reduction in neck and a 3.1-point (54.0%) reduction in arm pain (P < 0.05 for each). Sixty-one patients with pNP and 28 patients with pAP reported reductions in neck and arm pain over the first 6 months and 12 weeks postoperatively, respectively (P < 0.05 for each). Patients who underwent one-level ACDFs experienced a 47.2% reduction in neck pain and 55.1% reduction in arm pain over the first postoperative year (P < 0.05 for each), while those undergoing two-level ACDF experienced 39.7% and 49.2% reductions for neck and arm, respectively (P < 0.05 for each). This study suggests that patients experience significant improvements in neck and arm pain following ACDF regardless of presenting symptom. In addition, patients undergoing one-level ACDF report greater reductions in neck and arm pain than patients undergoing two-level fusion. Level of Evidence: 4.
A multispecies tree ring reconstruction of Potomac River streamflow (950-2001)
NASA Astrophysics Data System (ADS)
Maxwell, R. Stockton; Hessl, Amy E.; Cook, Edward R.; Pederson, Neil
2011-05-01
Mean May-September Potomac River streamflow was reconstructed from 950-2001 using a network of tree ring chronologies (n = 27) representing multiple species. We chose a nested principal components reconstruction method to maximize use of available chronologies backward in time. Explained variance during the period of calibration ranged from 20% to 53% depending on the number and species of chronologies available in each 25 year time step. The model was verified by two goodness of fit tests, the coefficient of efficiency (CE) and the reduction of error statistic (RE). The RE and CE never fell below zero, suggesting the model had explanatory power over the entire period of reconstruction. Beta weights indicated a loss of explained variance during the 1550-1700 period that we hypothesize was caused by the reduction in total number of predictor chronologies and loss of important predictor species. Thus, the reconstruction is strongest from 1700-2001. Frequency, intensity, and duration of drought and pluvial events were examined to aid water resource managers. We found that the instrumental period did not represent adequately the full range of annual to multidecadal variability present in the reconstruction. Our reconstruction of mean May-September Potomac River streamflow was a significant improvement over the Cook and Jacoby (1983) reconstruction because it expanded the seasonal window, lengthened the record by 780 years, and better replicated the mean and variance of the instrumental record. By capitalizing on variable phenologies and tree growth responses to climate, multispecies reconstructions may provide significantly more information about past hydroclimate, especially in regions with low aridity and high tree species diversity.
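The two goodness-of-fit statistics named above can be computed directly; a minimal sketch, with the usual convention that RE benchmarks predictions against the calibration-period mean and CE against the verification-period mean, both positive only when the model beats that baseline.

```python
import numpy as np

def re_ce(obs_verif, pred_verif, calib_mean):
    """Reduction of error (RE) and coefficient of efficiency (CE)."""
    obs_verif = np.asarray(obs_verif, dtype=float)
    pred_verif = np.asarray(pred_verif, dtype=float)
    sse = np.sum((obs_verif - pred_verif) ** 2)
    re = 1.0 - sse / np.sum((obs_verif - calib_mean) ** 2)
    ce = 1.0 - sse / np.sum((obs_verif - obs_verif.mean()) ** 2)
    return re, ce

# Toy usage with made-up flows:
print(re_ce(obs_verif=[3.1, 2.4, 4.0], pred_verif=[2.9, 2.6, 3.7],
            calib_mean=2.8))
```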
Sauer-Zavala, Shannon; Boswell, James F; Gallagher, Matthew W; Bentley, Kate H; Ametaj, Amantia; Barlow, David H
2012-09-01
The present study aimed to understand the contributions of both the trait tendency to experience negative emotions and how one relates to such experience in predicting symptom change during participation in the Unified Protocol (UP), a transdiagnostic treatment for emotional disorders. Data were derived from a randomized controlled trial comparing the UP to a waitlist control/delayed-treatment condition. First, effect sizes of pre- to post-treatment change for frequency of negative emotions and several variables measuring reactivity to emotional experience (emotional awareness and acceptance, fear of emotions, and anxiety sensitivity) were examined. Second, the relative contributions of change in negative emotions and emotional reactivity in predicting symptom (clinician-rated anxiety, depression, and severity of principal diagnosis) reductions were investigated. Results suggested that decreases in the frequency of negative emotions and reactivity to emotions following participation in the UP were both large in magnitude. Further, two emotional reactivity variables (fear of emotions and anxiety sensitivity) remained significantly related to symptom outcomes when controlling for negative emotions, and accounted for significant incremental variance in their prediction. These findings lend support to the notion that psychological health depends less on the frequency of negative emotions and more on how one relates to these emotions when they occur. Copyright © 2012 Elsevier Ltd. All rights reserved.
Analysis of variance calculations for irregular experiments
Jonathan W. Wright
1977-01-01
Irregular experiments may be more useful than much smaller regular experiments and can be analyzed statistically without undue expenditure of time. For a few missing plots, standard methods of calculating missing-plot values can be used. For more missing plots (up to 10 percent), seedlot means or randomly chosen plot means of the same seedlot can be substituted for...
The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation
Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark
2012-01-01
Background: Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings: Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance: Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053
Chen, Xi; Kopsaftopoulos, Fotis; Wu, Qi; Ren, He; Chang, Fu-Kuo
2018-04-29
In this work, a data-driven approach for identifying the flight state of a self-sensing wing structure with an embedded multi-functional sensing network is proposed. The flight state is characterized by the structural vibration signals recorded from a series of wind tunnel experiments under varying angles of attack and airspeeds. A large feature pool is created by extracting potential features from the signals covering the time domain, the frequency domain, and the information domain. Special emphasis is given to feature selection, for which a novel filter method is developed based on the combination of a modified distance evaluation algorithm and a variance inflation factor. Machine learning algorithms are then employed to establish the mapping relationship from the feature space to the practical state space. Results from two case studies demonstrate the high identification accuracy and the effectiveness of the model complexity reduction achieved via the proposed method, thus providing new perspectives on self-awareness for the next generation of intelligent air vehicles.
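A hedged sketch of the variance-inflation-factor half of the filter described above (the modified distance evaluation step is omitted): features whose VIF exceeds a threshold are largely explained by the other features and are dropped. The threshold of 10 is a common heuristic, not necessarily the paper's value.

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_filter(X, threshold=10.0):
    """Iteratively drop the most collinear column until all VIFs <= threshold.

    Returns indices of retained columns. Assumes X is a 2-D float array
    (add a constant column beforehand if your convention requires one).
    """
    X = np.asarray(X, dtype=float)
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        vifs = [variance_inflation_factor(X[:, keep], i)
                for i in range(len(keep))]
        worst = int(np.argmax(vifs))
        if vifs[worst] <= threshold:
            break
        del keep[worst]   # remove the feature best explained by the others
    return keep
```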
Chailler, Myrianne; Ellis, Jacqueline; Stolarik, Anne; Woodend, Kirsten
2010-01-01
Coughing has been identified as the most painful experience post cardiac surgery. Participants (n = 32), in a randomized crossover trial, applied a frozen gel pack to their sternal incision dressing before performing deep breathing and coughing (DB & C) exercises. Pain scores from 0 to 10 at rest were compared with pain scores post DB & C with and without the gel pack. Participants were also asked to describe their sensations with the frozen gel pack, as well as their preferences for gel pack application. The repeated measures analysis of variance revealed a significant reduction in pain scores between pre- and post-application of the gel pack (F = 28.69, p < .001). There were 22 (69%) participants who preferred the application of the gel pack compared with no gel pack. All 32 (100%) participants would reapply the gel pack in the future. This study demonstrates that cold therapy can be used to manage sternal incisional pain when DB & C.
Misleading first impressions: different for different facial images of the same person.
Todorov, Alexander; Porter, Jenny M
2014-07-01
Studies on first impressions from facial appearance have rapidly proliferated in the past decade. Almost all of these studies have relied on a single face image per target individual, and differences in impressions have been interpreted as originating in stable physiognomic differences between individuals. Here we show that images of the same individual can lead to different impressions, with within-individual image variance comparable to or exceeding between-individuals variance for a variety of social judgments (Experiment 1). We further show that preferences for images shift as a function of the context (e.g., selecting an image for online dating vs. a political campaign; Experiment 2), that preferences are predictably biased by the selection of the images (e.g., an image fitting a political campaign vs. a randomly selected image; Experiment 3), and that these biases are evident after extremely brief (40-ms) presentation of the images (Experiment 4). We discuss the implications of these findings for studies on the accuracy of first impressions. © The Author(s) 2014.
Ashmore, Jamile A; Friedman, Kelli E; Reichmann, Simona K; Musante, Gerard J
2008-04-01
To evaluate the associations between weight-based stigmatization, psychological distress, and binge eating behavior in a treatment-seeking obese sample. Ninety-three obese adults completed three questionnaires: 1) Stigmatizing Situations Inventory, 2) Brief Symptoms Inventory, and 3) Binge Eating Questionnaire. Correlational analyses were used to evaluate the association between stigmatizing experiences, psychological distress and binge eating behavior. Stigmatizing experiences predicted both binge eating behavior (R(2)=.20, p<.001) and overall psychological distress (R(2)=.18, p<.001). A substantial amount of the variance in binge eating predicted by weight-based stigmatization was due to the effect of psychological distress. Specifically, of the 20% of the variance in binge eating accounted for by stigmatizing experiences, between 7% and 34% (p<.01) was due to the effects of various indicators of psychological distress. These data suggest that weight-based stigmatization predicts binge eating behavior and that psychological distress associated with stigmatizing experiences may be an important mediating factor.
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance explains the insignificant narrow-sense and significant broad-sense heritability by using a combination of careful statistical epistatic analyses and functional genetic experiments.
Bed load transport over a broad range of timescales: Determination of three regimes of fluctuations
NASA Astrophysics Data System (ADS)
Ma, Hongbo; Heyman, Joris; Fu, Xudong; Mettra, Francois; Ancey, Christophe; Parker, Gary
2014-12-01
This paper describes the relationship between the statistics of bed load transport flux and the timescale over which it is sampled. A stochastic formulation is developed for the probability distribution function of bed load transport flux, based on the Ancey et al. (2008) theory. An analytical solution for the variance of bed load transport flux over differing sampling timescales is presented. The solution demonstrates that the timescale dependence of the variance of bed load transport flux reduces to a three-regime relation demarcated by an intermittency timescale (tI) and a memory timescale (tc). As the sampling timescale t increases, this variance passes through an intermittent stage (t ≪ tI), an invariant stage (tI < t < tc), and a memoryless stage (t ≫ tc). We propose a dimensionless number (Ra) to represent the relative strength of fluctuation, which provides a common ground for comparison of fluctuation strength among different experiments, as well as different sampling timescales for each experiment. Our analysis indicates that correlated motion and the discrete nature of bed load particles are responsible for this three-regime behavior. We use the data from three experiments with high temporal resolution of bed load transport flux to validate the proposed three-regime behavior. The theoretical solution for the variance agrees well with all three sets of experimental data. Our findings contribute to the understanding of the observed fluctuations of bed load transport flux over monosize/multiple-size grain beds, to the characterization of an inherent connection between short-term measurements and long-term statistics, and to the design of appropriate sampling strategies for bed load transport flux.
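A small illustration of the sampling-timescale dependence described above, using an AR(1) surrogate for the flux series (a toy stand-in, not the cited experimental data): the variance of the windowed mean stays roughly flat for windows shorter than the memory timescale and decays beyond it.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
n, phi = 200_000, 0.98          # memory timescale ~ 1/(1 - phi) = 50 steps
# AR(1) surrogate flux: q[i] = phi * q[i-1] + noise[i]
q = lfilter([1.0], [1.0, -phi], rng.normal(size=n))

for window in (1, 10, 100, 1_000, 10_000):
    means = q[: n - n % window].reshape(-1, window).mean(axis=1)
    print(window, means.var())
# Short windows: variance of the mean is nearly constant (correlated stage).
# Long windows: variance decays roughly as 1/window (memoryless stage).
```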
Terry, Douglas P; Puente, Antonio N; Brown, Courtney L; Faraco, Carlos C; Miller, L Stephen
2013-01-01
The personality traits Openness to experience and Neuroticism of the five-factor model have previously been associated with memory performance in nondemented older adults, but this relationship has not been investigated in samples with memory impairment. Our examination of 50 community-dwelling older adults (29 cognitively intact; 21 with questionable dementia as determined by the Clinical Dementia Rating Scale) showed that demographic variables (age, years of education, gender, and estimated premorbid IQ) and current depressive symptoms explained a significant amount of variance in Repeatable Battery of Neuropsychological Status Delayed Memory (adjusted R(2) = 0.23). After controlling for these variables, a measure of global cognitive status further explained a significant portion of variance in memory performance (ΔR(2) = 0.13; adjusted R(2) = 0.36; p < .01). Finally, adding Openness to this hierarchical linear regression model explained a significant additional portion of variance (ΔR(2) = 0.08; adjusted R(2) = 0.44; p < .01) but adding Neuroticism did not explain any additional variance. This significant relationship between Openness and better memory performance above and beyond one's cognitive status and demographic variables may suggest that a lifelong pattern of involvement in new cognitive activities could be preserved in old age or protect from memory decline. This study suggests that personality may be a powerful predictor of memory ability and clinically useful in this heterogeneous population.
The ties that bind what is known to the recall of what is new.
Nelson, D L; Zhang, N
2000-12-01
Cued recall success varies with what people know and with what they do during an episode. This paper focuses on prior knowledge and disentangles the relative effects of 10 features of words and their relationships on cued recall. Results are reported for correlational and multiple regression analyses of data obtained from free association norms and from 29 experiments. The 10 features were only weakly correlated with each other in the norms and, with notable exceptions, in the experiments. The regression analysis indicated that forward cue-to-target strength explained the most variance, followed by backward target-to-cue strength. Target connectivity and set size explained the next most variance, along with mediated cue-to-target strength. Finally, frequency, concreteness, shared associate strength, and cue set size also contributed significantly to recall. Taken together, indices of prior word knowledge explain 49% of the recall variance. Theoretically driven equations that use free association to predict cued recall were also evaluated. Each equation was designed to condense multiple indices of word interconnectivity into a single predictor.
Automatic segmentation of colon glands using object-graphs.
Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk
2010-02-01
Gland segmentation is an important step to automate the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variances in tissue sections, which may result in huge variances in gland appearances. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs the object-based information for the gland segmentation problem, instead of using the pixel-based information alone. Working with the images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.
NASA Astrophysics Data System (ADS)
Tan, Zhenkun; Ke, Xizheng
2017-10-01
The variance of the angle-of-arrival fluctuation of partially coherent Gaussian-Schell Model (GSM) beams propagating along a slant path has been investigated under the modified Hill turbulence model, based on the extended Huygens-Fresnel principle and the atmospheric refractive index structure constant model proposed by the International Telecommunication Union Radiocommunication Sector (ITU-R). An expression for this variance has been obtained. Firstly, the effects of optical wavelength, the inner and outer scale of the turbulence, and turbulence intensity on the variance of angle-of-arrival fluctuation have been analyzed by comparing the partially coherent GSM beam with the completely coherent Gaussian beam. Secondly, the variance of angle-of-arrival fluctuation has been compared between the von Karman spectrum and the modified Hill spectrum for the partially coherent GSM beam. Finally, the effects of beam waist radius and partial coherence length on the variance of angle-of-arrival of the collimated (focused) beam have been analyzed under the modified Hill turbulence model. The results show that the influence of the inner scale on the variance of angle-of-arrival fluctuation is larger than that of the outer scale. The variance of angle-of-arrival fluctuation under the modified Hill spectrum is larger than that under the von Karman spectrum. The influence of the waist radius on the variance of angle-of-arrival for the collimated beam is smaller than that for the focused beam. This study provides a necessary theoretical basis for experiments on partially coherent GSM beam propagation through atmospheric turbulence.
CMB-S4 and the hemispherical variance anomaly
NASA Astrophysics Data System (ADS)
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited, however, full northern coverage is still preferable.
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
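For contrast with the proposed joint method, here is a minimal sketch of the tandem approach it is compared against: dimension reduction on dummy-coded categories followed by clustering. The data and component counts are illustrative, not from the paper.

```python
# Tandem approach: step 1 reduce, step 2 cluster. The paper's cluster
# correspondence analysis instead optimizes one between-variance
# objective jointly.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

df = pd.DataFrame({'color': ['r', 'g', 'r', 'b', 'g', 'b'],
                   'shape': ['sq', 'sq', 'ci', 'ci', 'tr', 'tr']})
X = pd.get_dummies(df).to_numpy(dtype=float)    # indicator (dummy) coding

scores = PCA(n_components=2).fit_transform(X)   # step 1: dimension reduction
labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(scores)   # step 2: clustering
print(labels)
```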
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was found to be minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
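A hedged sketch of variance-based sensitivity screening in the spirit of the Sobol' analysis above, using SALib's Saltelli sampler with a toy model standing in for HSAMI; the parameter names, bounds, and model response are hypothetical.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {'num_vars': 3,
           'names': ['k_melt', 'k_infil', 'k_rout'],   # hypothetical names
           'bounds': [[0.0, 1.0]] * 3}

X = saltelli.sample(problem, 1024)                     # Saltelli design
Y = X[:, 0] + 0.1 * X[:, 1] + 0.01 * X[:, 2] * X[:, 0]  # toy model response

Si = sobol.analyze(problem, Y)
print(Si['ST'])   # total-order indices; parameters with near-zero ST are
                  # candidates for fixing, as in sequential parameter fixing
```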
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is less serious.
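The linear-regime recipe reduces to a one-line product; a minimal sketch, with placeholder inputs that would in practice come from the paper's tabulated values or software tool.

```python
# Linear regime: relative cosmic variance of a galaxy sample equals the
# galaxy bias times the dark matter cosmic variance for the field.
def galaxy_cosmic_variance(sigma_dm: float, bias: float) -> float:
    """Relative cosmic variance sigma_v of the galaxy number density."""
    return bias * sigma_dm

# Hypothetical inputs: sigma_dm = 0.10 for the field geometry and a bias
# of 3.8 for a massive sample; the product is ~0.38, cf. the GOODS example.
print(galaxy_cosmic_variance(0.10, 3.8))
```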
The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, T.E.; Mihalczo, J.T.
1995-12-31
This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.; Henson, Robert
2012-01-01
A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Casemix classification payment for sub-acute and non-acute inpatient care, Thailand.
Khiaocharoen, Orathai; Pannarunothai, Supasit; Zungsontiporn, Chairoj; Riewpaiboon, Wachara
2010-07-01
There is a need to develop other casemix classifications, apart from DRGs, for the sub-acute and non-acute inpatient care payment mechanism in Thailand. To develop a casemix classification for sub-acute and non-acute inpatient service. The study began with developing a classification system, analyzing cost, assigning payment weights, and ended with testing the validity of this new casemix system. Coefficient of variation, reduction in variance, linear regression, and split-half cross-validation were employed. The casemix for sub-acute and non-acute inpatient services contained 98 groups. Two percent of them had a coefficient of variation of cost higher than 1.5. The reduction in variance of cost after the classification was 32%. Two classification variables (physical function and the rehabilitation impairment categories) were key determinants of the cost (adjusted R^2 = 0.749, p = .001). Validity results of split-half cross-validation of sub-acute and non-acute inpatient services were high. The present study indicated that the casemix for sub-acute and non-acute inpatient services closely predicted the hospital resource use and should be further developed for payment of inpatient care in the sub-acute and non-acute phases.
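A minimal sketch of the reduction-in-variance statistic reported above, computed as the share of total cost variance explained by the group assignment; the toy costs and groups are illustrative.

```python
import numpy as np

def reduction_in_variance(cost, group):
    """Share of total cost variance explained by the casemix grouping."""
    cost, group = np.asarray(cost, dtype=float), np.asarray(group)
    within = sum(((cost[group == g] - cost[group == g].mean()) ** 2).sum()
                 for g in np.unique(group))
    total = ((cost - cost.mean()) ** 2).sum()
    return 1.0 - within / total

# Toy check: two well-separated cost groups give a high RIV; the 98-group
# system above reports 0.32 on real data.
cost = np.array([10, 12, 11, 40, 42, 41], dtype=float)
group = np.array(['A', 'A', 'A', 'B', 'B', 'B'])
print(reduction_in_variance(cost, group))
```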
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct inherent non-uniformity of detection property in the Compton camera which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair which is referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, the Compton list-mode data of a planar uniform source of 140 keV was generated from a GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficient of variations and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides an enhanced image quality and the increased recovery rate of uniformity in the reconstructed image.
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
Holocene constraints on simulated tropical Pacific climate
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K. M.; Carre, M.; Braconnot, P.; Leloup, J.; Zhou, Y.; Harrison, S. P.; Correge, T.; Mcgregor, H. V.; Collins, M.; Driscoll, R.; Elliot, M.; Schneider, B.; Tudhope, A. W.
2015-12-01
The El Niño-Southern Oscillation (ENSO) influences climate and weather worldwide, so uncertainties in its response to external forcings contribute to the spread in global climate projections. Theoretical and modeling studies have argued that such forcings may affect ENSO either via the seasonal cycle, the mean state, or extratropical influences, but these mechanisms are poorly constrained by the short instrumental record. Here we synthesize a pan-Pacific network of high-resolution marine biocarbonates spanning discrete snapshots of the Holocene (past 10,000 years of Earth's history), which we use to constrain a set of global climate model (GCM) simulations via a forward model and a consistent treatment of uncertainty. Observations suggest important reductions in ENSO variability throughout the interval, most consistently during 3-5 kyBP, when approximately 2/3 reductions are inferred. The magnitude and timing of these ENSO variance reductions bear little resemblance to those simulated by GCMs, or to equatorial insolation. The central Pacific witnessed a mid-Holocene increase in seasonality, at odds with the reductions simulated by GCMs. Finally, while GCM aggregate behavior shows a clear inverse relationship between seasonal amplitude and ENSO-band variance in sea-surface temperature, in agreement with many previous studies, such a relationship is not borne out by these observations. Our synthesis suggests that tropical Pacific climate is highly variable, but exhibited millennia-long periods of reduced ENSO variability whose origins, whether forced or unforced, contradict existing explanations. It also points to deficiencies in the ability of current GCMs to simulate forced changes in the tropical Pacific seasonal cycle and its interaction with ENSO, highlighting a key area of growth for future modeling efforts.
Owoeye, Olatunde; Arinola, Ganiyu O
2017-11-02
Mercuric chloride is an environmental pollutant that affects the nervous systems of mammals. Oxidative damage is one of the mechanisms of its toxicity, and antioxidants should mitigate this effect. A vegetable with antioxidant activity is Launaea taraxacifolia, whose ethanolic extract (EELT) was investigated in this experiment to determine its effect against mercuric chloride (MC) intoxication in rat brain. Thirty male Wistar rats were randomly assigned into five groups (n = 6) as follows: control; propylene glycol; EELT (400 mg/kg bwt) for 19 days; MC (HgCl2) (4 mg/kg bwt) for 5 days from day 15 of the experiment; EELT + MC, EELT (400 mg/kg bwt) for 14 days + MC (4 mg/kg bwt) for 5 days from day 15 of the experiment. All treatments were administered orally by gastric gavage. Behavioral tests were conducted on the 20th day, and rats were euthanized the same day. Blood and brain tissue were examined with regard to hematological and microanatomical parameters. Data were analyzed using analysis of variance with statistical significance set at p < .05. MC induced a significant (19%) reduction of thrombocytes, which was ameliorated by 57% (p < .05) by pretreatment with EELT when compared with the MC group. Behavioral results showed that MC elicited significant reductions in transitions, rearings, forelimb grip strength, and latency of geotaxis. Histologically, MC induced alterations in the microanatomy of the cerebral cortex, dentate gyrus, cornu ammonis 3, and cerebellum of rats. Treatment with EELT prior to MC administration significantly reduced the effects of MC on the hematological and behavioral parameters and ameliorated the histological alterations of the brain. These findings may be attributed partially to the antioxidant property of EELT, which demonstrated protective effects against MC-induced behavioral deficits and alteration of the microanatomy of rats' cerebral cortex, hippocampus, and cerebellum. In conclusion, EELT may be a valuable agent for further investigation in the prevention of acute neuropathy caused by inorganic mercury intoxication.
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
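A hedged sketch of the machine-learning stage of the TI error model described above, with synthetic data and illustrative feature names standing in for the real lidar and tower measurements; the model family and hyperparameters are assumptions, not the model's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.02, 0.30, n),    # physics-corrected lidar TI (illustrative)
    rng.uniform(3.0, 15.0, n),     # mean wind speed, m/s (illustrative)
    rng.uniform(-0.1, 0.1, n),     # atmospheric stability proxy (illustrative)
])
# Synthetic "tower truth": lidar TI with a stability-dependent distortion.
ti_tower = X[:, 0] * (1 + 0.3 * X[:, 2]) + rng.normal(0, 0.01, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], ti_tower[:1500])          # train on one split
print(model.score(X[1500:], ti_tower[1500:])) # held-out R^2
```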
Improving Lidar Turbulence Estimates for Wind Energy: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew
2016-10-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu
2016-12-15
Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance when monitoring the concentration of targets over time, and from an insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between the peptidic β-sheet structure and Congo Red. The formation of the peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, a crucial indicator for evaluating thrombus diseases in the clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7 ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (the existing gold-standard method for detecting P-Selectin in the clinic). Considering the wide range of ALP-based antibodies for immunoassays, this novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.
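The abstract reports the LOD dropping from 9.7 to 1.1 ng/ml but not how the LOD was computed; a common convention is the blank mean plus three standard deviations, mapped through a linear calibration. A sketch under that assumption, with all numbers hypothetical:

```python
import numpy as np

# Hypothetical blank replicates and calibration standards (absorbance readout)
blank = np.array([0.052, 0.049, 0.055, 0.051, 0.048, 0.053])
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # ng/ml
signal = np.array([0.11, 0.17, 0.30, 0.55, 1.05, 2.02])   # absorbance

slope, intercept = np.polyfit(conc, signal, 1)             # linear calibration
lod_signal = blank.mean() + 3.0 * blank.std(ddof=1)        # 3-sigma criterion
lod = (lod_signal - intercept) / slope
print(f"estimated LOD: {lod:.2f} ng/ml")
```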
Evaluation of tomotherapy MVCT image enhancement program for tumor volume delineation
Martin, Spencer; Rodrigues, George; Chen, Quan; Pavamani, Simon; Read, Nancy; Ahmad, Belal; Hammond, J. Alex; Venkatesan, Varagur; Renaud, James
2011-01-01
The aims of this study were to investigate the variability between physicians in delineation of head and neck tumors on original tomotherapy megavoltage CT (MVCT) studies and corresponding software-enhanced MVCT images, and to establish an optimal approach for evaluation of image improvement. Five physicians contoured the gross tumor volume (GTV) for three head and neck cancer patients on 34 original and enhanced MVCT studies. Variation between original and enhanced MVCT studies was quantified by the DICE coefficient and the coefficient of variance. Based on the volume of agreement between physicians, average DICE coefficients for GTV delineation were higher on enhanced MVCT for patients 1, 2, and 3 by 15%, 3%, and 7%, respectively, while delineation variance among physicians was reduced using enhanced MVCT for 12 of 17 weekly image studies. Enhanced MVCT provides advantages in reducing variance among physicians in delineation of the GTV. Agreement on contouring by the same physician on both original and enhanced MVCT was equally high. PACS numbers: 87.57.N‐, 87.57.np, 87.57.nt
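Both agreement metrics used here are simple to compute from rasterized contours; a minimal sketch, with the mask arrays standing in for the physicians' GTV contours:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(mask_a).astype(bool), np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def coefficient_of_variance(volumes):
    """CV of contoured volumes across physicians: std / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()
```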
Improving Lidar Turbulence Estimates for Wind Energy
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...
2016-10-03
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition, or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate, and the sample size.
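The paper's exact scoring constant is not given in the abstract; a common form of crude (two-part) MDL, equal to BIC/2 in nats, is shown below next to AIC and BIC for comparison. Treat the constant as an assumption:

```python
import numpy as np

def crude_mdl(log_lik, k, n):
    """Two-part 'crude' MDL score (lower is better): -log L + (k/2) log n."""
    return -log_lik + 0.5 * k * np.log(n)

def aic(log_lik, k):
    """Akaike's Information Criterion."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian Information Criterion."""
    return -2.0 * log_lik + k * np.log(n)

# Model selection: pick the candidate with the smallest score
candidates = {"simple": (-520.3, 4), "complex": (-512.9, 12)}  # hypothetical (logL, k)
n = 300
best = min(candidates, key=lambda m: crude_mdl(*candidates[m], n))
print(best)  # the simpler model wins here despite its lower likelihood
```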
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for a pure state, we first give a well-defined expansion of the fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing, and it is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) states are measurable. Furthermore, we also show that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
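The expansion itself is not reproduced in the abstract; for reference, the definitions it builds on are the pure-state fidelity and its mixed-state (Uhlmann) generalization:

```latex
F\bigl(|\psi\rangle\langle\psi|,\sigma\bigr)=\langle\psi|\sigma|\psi\rangle,
\qquad
F(\rho,\sigma)=\Bigl[\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\Bigr]^{2}.
```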
Biologic plating of unstable distal radial fractures.
Kwak, Jae-Man; Jung, Gu-Hee
2018-04-14
Volar locking plating through the flexor carpi radialis is a well-established technique for treating unstable distal radial fractures, with few reported complications. In certain circumstances, including metaphyseal comminuted fractures, bridge plating through a pronator quadratus (PQ)-sparing approach may be required to preserve the soft tissue envelope. This study describes our prospective experience with bridge plating through indirect reduction. Thirty-three wrists (four 23A2, six 23A3, 15 23C1, and eight 23C2) underwent bridge plating through a PQ-sparing approach with indirect reduction from June 2006 to December 2010. Mean patient age was 56.8 years (range, 25-83 years), and the mean follow-up period was 47.5 months (range, 36-84 months). Changes in radiologic parameters (volar tilt, radial inclination, radial length, and ulnar variance) were analyzed, and functional results at final follow-up were evaluated by measuring the Modified Mayo Wrist Score (MMWS) and Modified Gartland-Werley Score (MGWS). All wrists achieved bone healing without significant complications after a single operation. At final follow-up, radial length was restored from an average of 3.7 mm to 11.0 mm, as were radial inclination, from 16.4° to 22.5°, and volar tilt, from -9.1° to 5.5°. However, radial length was overcorrected in three wrists, and two experienced residual dorsal tilt. Excellent and good results on the MGWS were achieved in 30 wrists (90.9%). The average MMWS outcome was 92.6 (range, 75-100). Our experience with bridge plating was similar to that reported in earlier publications. Compared with the conventional technique, bridge plating through a PQ-sparing approach may help in managing metaphyseal comminuted fractures of both cortices with a reduced radio-ulnar index.
Analysis of manual segmentation in paranasal CT images.
Tingelhoff, Kathrin; Eichhorn, Klaus W G; Wagner, Ingo; Kunkel, Maria E; Moral, Analia I; Rilk, Markus E; Wahl, Friedrich M; Bootz, Friedrich
2008-09-01
Manual segmentation is often used for evaluation of automatic or semi-automatic segmentation. The purpose of this paper is to describe the inter- and intraindividual variability and the dubiety of manual segmentation as a gold standard, and to find reasons for the discrepancy. We carried out two experiments. In the first, ten ENT surgeons, ten medical students, and one engineer outlined the right maxillary sinus and the ethmoid sinuses manually on a standard CT dataset of a human head. In the second experiment, two participants outlined the maxillary sinus and ethmoid sinuses five times consecutively. Manual segmentation was accomplished with custom software using a line segmentation tool. The first experiment shows the interindividual variability of manual segmentation, which is higher for the ethmoidal sinuses than for the maxillary sinuses. The variability can be caused by the level of experience, different interpretation of the CT data, or different levels of accuracy. The second experiment shows intraindividual variability, which is lower than the interindividual variability. Most variance in both experiments appeared during segmentation of the ethmoidal sinuses and outlining of the hiatus semilunaris. Given the inter- and intraindividual variances, the segmentation result of a single manual segmenter cannot be used directly as a gold standard for the evaluation of automatic segmentation algorithms.
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
Lee, Jounghee; Park, Sohyun
2016-04-01
The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We used chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. This study involved 104 individuals employed at the worksite cafeterias, comprising 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium-reduction practices were more common, and perceived barriers appeared to be lower, among participants with a high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were the use of processed foods and the limited methods of sodium-reduced cooking in worksite cafeterias. To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods.
Ivezić, Slađana Štrkalj; Sesar, Marijan Alfonso; Mužinić, Lana
2017-03-01
Self-stigma adversely affects recovery from schizophrenia. Analyses of self-stigma reduction programs have found that few studies have investigated the impact of education about the illness on self-stigma reduction. The objective of this study was to determine whether psychoeducation based on the principles of recovery and empowerment, using therapeutic group factors, assists in reduction of self-stigma, increased empowerment, and reduced perception of discrimination in patients with schizophrenia. Forty patients participated in a group psychoeducation program and were compared with a control group of 40 patients placed on the waiting list for the same program. A Solomon four-group design was used to control for the influence of the pretest. Rating scales were used to measure internalized stigma, empowerment, and perception of discrimination. Two-way analysis of variance was used to determine the main effects and the interaction between treatment and pretest. Simple analysis of variance with repeated measures was used to additionally test the effect of treatment on self-stigma, empowerment, and perceived discrimination. The participants in the psychoeducation group had lower scores on internalized stigma (F(1,76)=8.18; p<0.01) than the patients treated as usual. Analysis also confirmed the same effect when comparing the experimental group before and after psychoeducation (F(1,19)=5.52; p<0.05). All participants showed a positive trend for empowerment. Psychoeducation did not influence perception of discrimination. Group psychoeducation decreased the level of self-stigma. This intervention can assist in recovery from schizophrenia.
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using uniform images and real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image. The authors simulated lower-dose images and compared these with the real images, evaluating the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images; the relative average error for the local variance was smaller than 1%. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
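A rough illustration of the idea (not the authors' calibrated algorithm, whose noise mask comes from flat-field acquisitions): scale a count image and inject the quantum noise that the scaling removed. The Anscombe transformation shown is the variance-stabilizing map the method builds on; the pure-Poisson noise model is an assumption:

```python
import numpy as np

def anscombe(x):
    """Anscombe transformation: maps Poisson data to ~unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def simulate_low_dose(counts, f, rng=None):
    """Scale a standard-dose count image to dose fraction f and inject the
    missing quantum noise: Var(f*N) = f^2*lam, the target is f*lam, so add
    zero-mean Gaussian noise with variance f*(1-f)*lam, estimating lam by
    the measured counts. A sketch under a pure-Poisson assumption."""
    rng = rng or np.random.default_rng()
    lam_hat = np.maximum(np.asarray(counts, dtype=float), 0.0)
    extra = rng.normal(0.0, np.sqrt(f * (1.0 - f) * lam_hat))
    return f * lam_hat + extra

# The transform's variance-stabilizing property (variance ~ 1 for Poisson):
rng = np.random.default_rng(0)
print(np.var(anscombe(rng.poisson(50.0, 100000))))  # ~= 1
```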
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
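For context (a textbook relation, not taken from this report), the sensitivity of a switching radiometer with separate antenna and reference integration times can be written as

```latex
\Delta T \;\approx\; T_{\mathrm{sys}}\,
\sqrt{\frac{1}{B\,\tau_{\mathrm{ant}}}\;+\;\frac{1}{B\,\tau_{\mathrm{ref}}}},
```

so raising the antenna duty cycle increases τ_ant while reference and gain averaging increase the effective τ_ref, each shrinking ΔT; at the classical 50% Dicke duty cycle the expression reduces to ΔT = 2T_sys/√(Bτ).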
The Stanford Prison Experiment in Introductory Psychology Textbooks: A Content Analysis
ERIC Educational Resources Information Center
Bartels, Jared M.
2015-01-01
The present content analysis examines the coverage of theoretical and methodological problems with the Stanford prison experiment (SPE) in a sample of introductory psychology textbooks. Categories included the interpretation and replication of the study, variance in guard behavior, participant selection bias, the presence of demand characteristics…
2010-02-01
Findings also highlight the impact of homefront and post-deployment life events in addition to war-zone stress exposures, and emphasize the importance of... additional 20% of the variance; war-zone stressors and perceived war-zone threat together contributed an additional 19% of the variance; and homefront... in the types of noncombat (i.e., post battle) war-zone events experienced by the two groups. Homefront concerns experienced during deployment were
Estimates of tropical analysis differences in daily values produced by two operational centers
NASA Technical Reports Server (NTRS)
Kasahara, Akira; Mizzi, Arthur P.
1992-01-01
To assess the uncertainty of daily synoptic analyses for the atmospheric state, the intercomparison of three First GARP Global Experiment level IIIb datasets is performed. Daily values of divergence, vorticity, temperature, static stability, vertical motion, mixing ratio, and diagnosed diabatic heating rate are compared for the period of 26 January-11 February 1979. The spatial variance and mean, temporal mean and variance, 2D wavenumber power spectrum, anomaly correlation, and normalized square difference are employed for comparison.
Variance fluctuations in nonstationary time series: a comparative study of music genres
NASA Astrophysics Data System (ADS)
Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.
2004-05-01
An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
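The authors' exact estimator is not given in the abstract; detrended fluctuation analysis (DFA), the standard technique for quantifying how detrended-window fluctuations scale with window size, serves as a stand-in here:

```python
import numpy as np

def variance_scaling_exponent(x, scales):
    """DFA-style fluctuation analysis: integrate the signal, compute the RMS
    of linearly detrended windows of size s, and fit F(s) ~ s^h in log-log."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        t = np.arange(s)
        f2 = 0.0
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # per-window linear detrend
            f2 += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(f2 / n))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# Example: uncorrelated noise yields an exponent near 0.5
rng = np.random.default_rng(1)
print(variance_scaling_exponent(rng.normal(size=4096), [16, 32, 64, 128, 256]))
```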
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
Helicopter moving horizontal tail (MHT) strategy is applied in order to save helicopter flight control system (FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and together they are linearized around a straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (SPSA). In order to observe the improvement over classical control behavior, closed-loop analyses are performed. PMID:26180841
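SPSA itself is a published, well-defined procedure (Spall's algorithm); a minimal sketch with generic gain-sequence constants, the helicopter cost function of course not reproduced:

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """SPSA: estimate the gradient from only two loss evaluations per step,
    perturbing all parameters simultaneously with a random +/-1 vector."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** alpha            # decaying step size
        ck = c / k ** gamma            # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher vector
        y_plus = loss(theta + ck * delta)
        y_minus = loss(theta - ck * delta)
        g_hat = (y_plus - y_minus) / (2.0 * ck * delta)    # gradient estimate
        theta -= ak * g_hat
    return theta

# Example: minimize a simple quadratic; converges near the optimum at 3
print(spsa_minimize(lambda t: float(np.sum((t - 3.0) ** 2)), np.zeros(4)))
```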
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data over the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over arcs of about 5000 km, and the second removes an along-track once-per-revolution sine wave with a wavelength of about 40,000 km. Results show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
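As an illustration of the two detrending strategies (arc definitions and data editing are not in the abstract and are omitted), both reduce to linear least squares:

```python
import numpy as np

def remove_once_per_rev(height, t, period):
    """Fit and remove h(t) ~ a*sin(2*pi*t/T) + b*cos(2*pi*t/T) + c,
    the once-per-revolution orbit-error model, by linear least squares."""
    h = np.asarray(height, dtype=float)
    w = 2.0 * np.pi * np.asarray(t, dtype=float) / period
    A = np.column_stack([np.sin(w), np.cos(w), np.ones_like(w)])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return h - A @ coef

def remove_polynomial(height, t, deg=1):
    """The first technique: remove an along-track polynomial (degree 1 here)."""
    h = np.asarray(height, dtype=float)
    coef = np.polyfit(t, h, deg)
    return h - np.polyval(coef, t)
```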
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve of C-peptide following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled from baseline to 12 months after enrollment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed-vs.-expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
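The near-50% reduction in target sample size is what one expects from the usual proportionality of sample size to residual variance; for a two-arm comparison of means, a textbook form (symbols are the generic α, β, effect size Δ, and residual SD σ, not values from the paper) is

```latex
n \;=\; \frac{2\,\bigl(z_{1-\alpha/2}+z_{1-\beta}\bigr)^{2}\,\sigma^{2}}{\Delta^{2}},
```

so halving σ² via the ANCOVA adjustment roughly halves the required n per arm.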
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation, and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
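Of the three angiographic contrasts compared, speckle variance has the simplest canonical form: the per-pixel intensity variance across repeated B-scans at one location. A minimal sketch (phase variance and split-spectrum decorrelation require extra steps not shown):

```python
import numpy as np

def speckle_variance(bscans):
    """Per-pixel intensity variance across repeated B-scans; flowing blood
    decorrelates the speckle pattern and so raises variance in vascular pixels.
    bscans: array of shape (n_repeats, depth, width)."""
    return np.asarray(bscans, dtype=float).var(axis=0, ddof=1)
```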
Methods for Improving Information from ’Undesigned’ Human Factors Experiments.
Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction
NASA Astrophysics Data System (ADS)
Ardhi, Muh. Waskito; Sulistyarsi, Ani; Pujiati
2017-06-01
Aspergillus sp. is a microorganism with a high ability to produce cellulase enzymes. Producing cellulase enzymes requires an appropriate inoculum concentration and incubation time to obtain optimum enzyme activity. This study aimed to determine the effect of inoculum concentration and incubation time on the production and activity of cellulases from Aspergillus sp. on a bagasse substrate. The research used an experimental method: a completely randomized design with 2 factors, repeated 2 times. The treatments comprised inoculum concentration (K) of 5% (K1), 15% (K2), and 25% (K3), and incubation time (F) of 3 days (F1), 6 days (F2), 9 days (F3), and 12 days (F4). The data taken from the treatments were glucose reduction and protein levels of the crude cellulase enzyme, measured using the Nelson-Somogyi and Biuret methods, respectively. Data were analyzed with two-way analysis of variance (ANOVA) at a significance level of 5%, followed by an LSD test. The results showed that Fcalc > Ftable; thus, there is an effect of inoculum concentration and incubation time on the activity of crude cellulases of Aspergillus sp. The highest glucose reduction was in treatment K3F4 (25% inoculum concentration with a 12-day incubation time), at 12.834 g/ml, and the highest protein content was also in K3F4, at 0.740 g/ml.
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. In this work, we therefore propose a method to convert the non-positive signal to a positive signal by explicitly controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
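A naive version of the second step might look as follows; the PWLS smoothing of the first step and the paper's variance control are not reproduced, and the window size is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def restore_positivity(sinogram, size=5, floor=1e-6):
    """Replace non-positive detector readings with their (floored) local mean
    so that the subsequent log transform is defined everywhere."""
    s = np.asarray(sinogram, dtype=float)
    local_mean = uniform_filter(s, size=size)
    return np.where(s > 0, s, np.maximum(local_mean, floor))
```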
Lalonde, Kaylah; Holt, Rachael Frush
2017-01-01
Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically developing 2-year-olds completed the newly modified toddler change/no-change procedure. Behavioral tests and parent report questionnaires were used to measure several cognitive and linguistic constructs. Stepwise linear regression was used to relate discrimination sensitivity to the cognitive and linguistic measures. In addition, discrimination results from the current experiment were compared with those from 2-year-old children tested in a previous experiment. Results: Receptive vocabulary and working memory explained 56.6% of variance in discrimination performance. Performance on the modified toddler change/no-change procedure used in the current experiment did not differ from that in a previous investigation, which used the original version of the procedure. Conclusions: The relationship between speech discrimination and receptive vocabulary and working memory provides further evidence that the procedure is sensitive to the strength of perceptual representations. The role for working memory might also suggest that there are specific subject-related, nonsensory factors limiting the applicability of the procedure to children who have not reached the necessary levels of cognitive and linguistic development. PMID:24023371
Lalonde, Kaylah; Holt, Rachael Frush
2014-02-01
This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Twenty typically developing 2-year-olds completed the newly modified toddler change/no-change procedure. Behavioral tests and parent report questionnaires were used to measure several cognitive and linguistic constructs. Stepwise linear regression was used to relate discrimination sensitivity to the cognitive and linguistic measures. In addition, discrimination results from the current experiment were compared with those from 2-year-old children tested in a previous experiment. Receptive vocabulary and working memory explained 56.6% of variance in discrimination performance. Performance on the modified toddler change/no-change procedure used in the current experiment did not differ from that in a previous investigation, which used the original version of the procedure. The relationship between speech discrimination and receptive vocabulary and working memory provides further evidence that the procedure is sensitive to the strength of perceptual representations. The role for working memory might also suggest that there are specific subject-related, nonsensory factors limiting the applicability of the procedure to children who have not reached the necessary levels of cognitive and linguistic development.
Decadal climate prediction in the large ensemble limit
NASA Astrophysics Data System (ADS)
Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.
2017-12-01
In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
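The variance decomposition described is straightforward to compute once the two ensembles are in hand; a sketch for a single grid point or climate index, with the array layout as an assumption:

```python
import numpy as np

def forced_internal_variance(ens):
    """Split ensemble variance at one location into a forced part (variance in
    time of the ensemble mean) and an internal part (time-mean variance of
    member deviations from the ensemble mean).
    ens: array of shape (n_members, n_times)."""
    e = np.asarray(ens, dtype=float)
    mean = e.mean(axis=0)                          # forced-signal estimate
    forced = mean.var(ddof=1)
    internal = (e - mean).var(axis=0, ddof=1).mean()
    return forced, internal
```

In the uninitialized set only the forced component survives ensemble averaging, which is what makes the initialized/uninitialized comparison meaningful.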
Choice in experiential learning: True preferences or experimental artifacts?
Ashby, Nathaniel J S; Konstantinidis, Emmanouil; Yechiam, Eldad
2017-03-01
The rate of selecting different options in the decisions-from-feedback paradigm is commonly used to measure preferences resulting from experiential learning. While convergence to a single option increases with experience, some variance in choice remains even when options are static and offer fixed rewards. Employing a decisions-from-feedback paradigm followed by a policy-setting task, we examined whether the observed variance in choice is driven by factors related to the paradigm itself: Continued exploration (e.g., believing options are non-stationary) or exploitation of perceived outcome patterns (i.e., a belief that sequential choices are not independent). Across two studies, participants showed variance in their choices, which was related (i.e., proportional) to the policies they set. In addition, in Study 2, participants' reported under-confidence was associated with the amount of choice variance in later choices and policies. These results suggest that variance in choice is better explained by participants lacking confidence in knowing which option is better, rather than methodological artifacts (i.e., exploration or failures to recognize outcome independence). As such, the current studies provide evidence for the decisions-from-feedback paradigm's validity as a behavioral research method for assessing learned preferences. Copyright © 2017 Elsevier B.V. All rights reserved.
Online and offline tools for head movement compensation in MEG.
Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert
2013-03-01
Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
Development and validation of an instrument to assess job satisfaction in eye-care personnel.
Paudel, Prakash; Cronjé, Sonja; O'Connor, Patricia M; Khadka, Jyoti; Rao, Gullapalli N; Holden, Brien A
2017-11-01
The aim was to develop and validate an instrument to measure job satisfaction in eye-care personnel and assess the job satisfaction of one-year trained vision technicians in India. A pilot instrument for assessing job satisfaction was developed, based on a literature review and input from a public health expert panel. Rasch analysis was used to assess psychometric properties and to undertake an iterative item reduction. The instrument was then administered to vision technicians in vision centres of Andhra Pradesh in India. Associations between vision technicians' job satisfaction and factors such as age, gender and experience were analysed using t-test and one-way analysis of variance. Rasch analysis confirmed that the 15-item job satisfaction in eye-care personnel (JSEP) was a unidimensional instrument with good fit statistics, measurement precisions and absence of differential item functioning. Overall, vision technicians reported high rates of job satisfaction (0.46 logits). Age, gender and experience were not associated with high job satisfaction score. Item score analysis showed non-financial incentives, salary and workload were the most important determinants of job satisfaction. The 15-item JSEP instrument is a valid instrument for assessing job satisfaction among eye-care personnel. Overall, vision technicians in India demonstrated high rates of job satisfaction. © 2016 Optometry Australia.
Are judgments a form of data clustering? Reexamining contrast effects with the k-means algorithm.
Boillaud, Eric; Molina, Guylaine
2015-04-01
A number of theories have been proposed to explain in precise mathematical terms how statistical parameters and sequential properties of stimulus distributions affect category ratings. Various contextual factors such as the mean, the midrange, and the median of the stimuli; the stimulus range; the percentile rank of each stimulus; and the order of appearance have been assumed to influence judgmental contrast. A data clustering reinterpretation of judgmental relativity is offered wherein the influence of the initial choice of centroids on judgmental contrast involves 2 combined frequency and consistency tendencies. Accounts of the k-means algorithm are provided, showing good agreement with effects observed on multiple distribution shapes and with a variety of interaction effects relating to the number of stimuli, the number of response categories, and the method of skewing. Experiment 1 demonstrates that centroid initialization accounts for contrast effects obtained with stretched distributions. Experiment 2 demonstrates that the iterative convergence inherent to the k-means algorithm accounts for the contrast reduction observed across repeated blocks of trials. The concept of within-cluster variance minimization is discussed, as is the applicability of a backward k-means calculation method for inferring, from empirical data, the values of the centroids that would serve as a representation of the judgmental context. (c) 2015 APA, all rights reserved.
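A bare one-dimensional k-means makes the paper's premise concrete: the initial centroids select which local optimum, and hence which category boundaries, the iteration converges to. A minimal sketch (the authors' backward k-means calculation is not shown):

```python
import numpy as np

def kmeans_1d(x, centroids, n_iter=50):
    """Lloyd's algorithm on 1-D stimulus values; different initial centroids
    can converge to different partitions of the same stimulus distribution."""
    x = np.asarray(x, dtype=float)
    c = np.asarray(centroids, dtype=float).copy()
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(len(c)):
            if np.any(labels == j):          # keep empty clusters where they are
                c[j] = x[labels == j].mean()
    return labels, c

# Example: three stimulus clumps, deliberately skewed initial centroids
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(m, 0.5, 100) for m in (1.0, 4.0, 7.0)])
print(kmeans_1d(x, [0.0, 1.0, 2.0])[1])
```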
Wittorf, Andreas; Jakobi-Malterre, Ute E; Beulen, Silke; Bechdolf, Andreas; Müller, Bernhard W; Sartory, Gudrun; Wagner, Michael; Wiedemann, Georg; Wölwer, Wolfgang; Herrlich, Jutta; Klingberg, Stefan
2013-12-30
Despite the promising findings in relation to the efficacy of cognitive behavioral therapy for psychosis (CBTp), little attention has been paid to the therapy skills necessary to deliver CBTp and to the influence of such skills on processes underlying therapeutic change. Our study investigated the associations between general and technical therapy skills and patient experiences of change processes in CBTp. The study sample consisted of 79 patients with psychotic disorders who had undergone CBTp. We randomly selected one tape-recorded therapy session from each of the cases. General and technical therapy skills were assessed by the Cognitive Therapy Scale for Psychosis. The Bern Post Session Report for Patients was applied to measure patient experiences of general change processes in the sense of Grawe's psychological therapy. General skills, such as feedback and understanding, explained 23% of the variance of patients' self-esteem experience, but up to 10% of the variance of mastery, clarification, and contentment experiences. The technical skill of guided discovery consistently showed negative associations with patients' alliance, contentment, and control experiences. The study points to the importance of general therapy skills for patient experiences of change processes in CBTp. Some technical skills, however, could detrimentally affect the therapeutic relationship. © 2013 Elsevier Ireland Ltd. All rights reserved.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time demanding that the G-optimal design cannot really be found in practice with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
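The D-criterion, unlike the G-criterion, is cheap to evaluate from the design matrix alone; a minimal sketch for a linear trend model, with illustrative design points:

```python
import numpy as np

def log_d_criterion(X):
    """log-determinant of the information matrix X'X; a D-optimal design
    maximizes this quantity."""
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

# Example: spread vs. clustered points for the trend model [1, s]
s_spread = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
s_clustered = np.array([0.4, 0.45, 0.5, 0.55, 0.6])
for s in (s_spread, s_clustered):
    print(log_d_criterion(np.column_stack([np.ones_like(s), s])))
# the spread design yields the larger log-determinant
```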
Least-squares dual characterization for ROI assessment in emission tomography
NASA Astrophysics Data System (ADS)
Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.
2013-06-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex reduces the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5% compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing can intuitively and efficiently handle the resolution-variance tradeoff.
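Schematically, with A the system matrix, g the measured projection data, f the activity image, and χ the ROI characteristic function (the notation is ours, and details beyond the abstract are assumptions), the dual formulation replaces post-reconstruction summation by

```latex
a=\langle\chi,f\rangle\;\approx\;\langle u,g\rangle,
\qquad A^{*}u\approx\chi\ \text{(least squares)},
\qquad \operatorname{Var}(a)\approx\sum_i u_i^{2}\,\operatorname{Var}(g_i),
```

since ⟨u, g⟩ = ⟨u, Af⟩ = ⟨A*u, f⟩; the variance of the ROI estimate then follows directly from the measurement variances, with no reconstruction step.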
Meta-analysis of the performance variation in broilers experimentally challenged by Eimeria spp.
Kipper, Marcos; Andretta, Ines; Lehnen, Cheila Roberta; Lovatto, Paulo Alberto; Monteiro, Silvia Gonzalez
2013-09-01
A meta-analysis was carried out to (1) study the relation between the variation in feed intake and weight gain in broilers infected with Eimeria acervulina, Eimeria maxima, Eimeria tenella, or a pool of Eimeria species, and (2) identify and quantify the effects involved in the infection. A database of articles addressing experimental infection with coccidia in broilers was developed; publications had to present results on animal performance (weight gain, feed intake, and feed conversion ratio). The database comprised 69 publications, totalling around 44 thousand animals. The meta-analysis followed three sequential analyses: graphical, correlation, and variance-covariance. The feed intake of the groups challenged by E. acervulina and E. tenella did not differ (P>0.05) from the control group, whereas feed intake in the groups challenged by E. maxima and the pool increased by 8% and 5% (P<0.05) relative to the control group. Challenged groups presented a decrease (P<0.05) in weight gain compared with control groups. All challenged groups showed a reduction in weight gain even when there was no reduction (P<0.05) in feed intake (adjustment through variance-covariance analysis). The feed intake variation in broilers infected with E. acervulina, E. maxima, E. tenella, or the pool showed a quadratic (P<0.05) influence on the variation in weight gain. As for the isolated effects, the challenges have an impact of less than 1% on the variance in feed intake and weight gain; however, the magnitude of the effects varied with Eimeria species, animal age, sex, and genetic line. In general, the age effect is larger than the challenge effect, showing that age at challenge is important in determining the impact of Eimeria infection. Copyright © 2013 Elsevier B.V. All rights reserved.
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
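The splitting and Russian roulette referred to here follow the standard weight-window game; a schematic of the per-particle logic, with the window bounds and split cap as illustrative parameters:

```python
import numpy as np

def weight_window(weight, w_low, w_up, rng, max_split=10):
    """Standard weight-window game: split a particle whose weight is above the
    window, play Russian roulette below it, leave it alone inside it.
    Returns the list of surviving particle weights (expected weight preserved)."""
    if weight > w_up:
        n = min(int(np.ceil(weight / w_up)), max_split)
        return [weight / n] * n
    if weight < w_low:
        w_survival = 0.5 * (w_low + w_up)
        return [w_survival] if rng.random() < weight / w_survival else []
    return [weight]

rng = np.random.default_rng(0)
print(weight_window(5.0, 0.5, 2.0, rng))  # split into 3 particles of weight 5/3
print(weight_window(0.1, 0.5, 2.0, rng))  # roulette: survive with prob 0.1/1.25
```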
Impact of an Adlerian Based Pretrial Diversion Program: Self Concept and Dissociation
ERIC Educational Resources Information Center
Norvell, Jeanell J.
2010-01-01
Clients' self concepts and dissociative experiences were examined to determine the impact of an Adlerian based pretrial diversion program. Clients completing the program displayed a significant change in self concepts and dissociative experiences. A repeated measures multivariate analysis of variance indicated a 35% change, made up of the…
ERIC Educational Resources Information Center
Deater-Deckard, Kirby
2016-01-01
Most of the individual difference variance in the population is found "within" families, yet studying the processes causing this variation is difficult due to confounds between genetic and nongenetic influences. Quasi-experiments can be used to test hypotheses regarding environment exposure (e.g., timing, duration) while controlling for…
An Evaluation of Psychophysical Models of Auditory Change Perception
ERIC Educational Resources Information Center
Micheyl, Christophe; Kaernbach, Christian; Demany, Laurent
2008-01-01
In many psychophysical experiments, the participant's task is to detect small changes along a given stimulus dimension or to identify the direction (e.g., upward vs. downward) of such changes. The results of these experiments are traditionally analyzed with a constant-variance Gaussian (CVG) model or a high-threshold (HT) model. Here, the authors…
Kamara, Eli; Robinson, Jonathon; Bas, Marcel A; Rodriguez, Jose A; Hepinstall, Matthew S
2017-01-01
Acetabulum positioning affects dislocation rates, component impingement, bearing surface wear rates, and need for revision surgery. Novel techniques purport to improve the accuracy and precision of acetabular component position, but may have a significant learning curve. Our aim was to assess whether adopting robotic or fluoroscopic techniques improves acetabulum positioning compared to manual total hip arthroplasty (THA) during the learning curve. Three types of THAs were compared in this retrospective cohort: (1) the first 100 fluoroscopically guided direct anterior THAs (fluoroscopic anterior [FA]) done by a surgeon learning the anterior approach, (2) the first 100 robotic-assisted posterior THAs done by a surgeon learning robotic-assisted surgery (robotic posterior [RP]), and (3) the last 100 manual posterior (MP) THAs done by each surgeon (200 THAs) before adoption of novel techniques. Component position was measured on plain radiographs. Radiographic measurements were taken by 2 blinded observers. The percentage of hips within the surgeons' "target zone" (inclination, 30°-50°; anteversion, 10°-30°) was calculated, along with the percentage within the "safe zone" of Lewinnek (inclination, 30°-50°; anteversion, 5°-25°) and Callanan (inclination, 30°-45°; anteversion, 5°-25°). Relative risk (RR) and absolute risk reduction (ARR) were calculated. Variances (square of the standard deviations) were used to describe the variability of cup position. Seventy-six percent of MP THAs were within the surgeons' target zone compared with 84% of FA THAs and 97% of RP THAs. This difference was statistically significant, associated with a RR reduction of 87% (RR, 0.13 [0.04-0.40]; P < .01; ARR, 21%; number needed to treat, 5) for RP compared to MP THAs. Compared to FA THAs, RP THAs were associated with a RR reduction of 81% (RR, 0.19 [0.06-0.62]; P < .01; ARR, 13%; number needed to treat, 8). Variances were lower for acetabulum inclination and anteversion in RP THAs (14.0 and 19.5) as compared to the MP (37.5 and 56.3) and FA (24.5 and 54.6) groups. These differences were statistically significant (P < .01). Adoption of robotic techniques delivers significant and immediate improvement in the precision of acetabular component positioning during the learning curve. While fluoroscopy has been shown to be beneficial with experience, a learning curve exists before precision improves significantly. Copyright © 2016 Elsevier Inc. All rights reserved.
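The reported effect sizes can be checked from the outside-target proportions implied by the abstract (24/100 manual, 16/100 fluoroscopic, 3/100 robotic):

```python
def risk_stats(events_control, n_control, events_treated, n_treated):
    """Relative risk, absolute risk reduction, and number needed to treat."""
    r_c = events_control / n_control
    r_t = events_treated / n_treated
    rr = r_t / r_c
    arr = r_c - r_t
    nnt = 1.0 / arr if arr > 0 else float("inf")
    return rr, arr, nnt

print(risk_stats(24, 100, 3, 100))  # RR ~= 0.13, ARR = 0.21, NNT ~= 5 (RP vs. MP)
print(risk_stats(16, 100, 3, 100))  # RR ~= 0.19, ARR = 0.13, NNT ~= 8 (RP vs. FA)
```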
Levine, Martin; Owen, Willis L; Avery, Kevin T
2005-06-01
Fluoridated dentifrices reduce dental caries in subjects who perform effective oral hygiene. Actinomyces naeslundii increases in teeth-adherent microbial biofilms (plaques) in these subjects, and a well-characterized serum immunoglobulin G (IgG) antibody response (Actinomyces antibody [A-Ab]) is also increased. Other studies suggest that a serum IgG antibody response to streptococcal d-alanyl poly(glycerophosphate) (S-Ab) may indicate caries experience associated strongly with gingival health and exposure to fluoridated water. The aim of this study was to investigate relationships between A-Ab response, oral hygiene, S-Ab response, and caries experience. Measurements were made of A-Ab and S-Ab concentrations, caries experience (number of decayed, missing, and filled teeth [DMFT]; number of decayed, missing, and filled tooth surfaces [DMFS]; and number of decayed teeth needing treatment [DT]), exposure to fluoridated water (Flu), mean clinical pocket depth (PD; in millimeters), and extent of plaque (PL) and gingival bleeding on probing (BOP). A-Ab concentration, the dependent variable in a multiple regression analysis, increased with S-Ab concentration and decreased with PL and DMFT adjusted for Flu (R(2) = 0.51, P < 0.002). Residual associations with age, DMFS, DT, and BOP were not significant. In addition, an elevated A-Ab response, defined from immunoprecipitation and immunoassay measurements, indicated a significant, 30% reduction in DMFT after adjustment for significant age and Flu covariance (analysis of variance with covariance F statistic = 10.6, P < 0.003; S-Ab response and interactions not significant). Thus, an elevated A-Ab response indicates less caries in subjects performing effective oral hygiene using fluoridated dentifrices. Conversely, a low A-Ab response is suggestive of decreased A. naeslundii binding to saliva-coated apatite and greater caries experience, as reported by others.
Mulder, Herman A.; Hill, William G.; Knol, Egbert F.
2015-01-01
There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-09-21
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock, we have performed a time series of high-temperature magma-mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration, the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique, we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in order to constrain typical "mixing to eruption" time lapses, such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
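The exponential form of the CVD relation lends itself to a simple calibrate-then-invert workflow. The sketch below, with synthetic numbers standing in for the experimental time series, shows one plausible way to fit the decay constant and invert a measured variance for a mixing-to-eruption time; it is an illustration under assumed data, not the authors' calibration.

```python
# Sketch: fit an exponential decay to concentration-variance measurements
# from time-series mixing experiments, then invert it to date a mixing event.
# The data points and rate constant below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def cvd(t, sigma2_0, k):
    """Concentration variance decaying exponentially with time t."""
    return sigma2_0 * np.exp(-k * t)

t_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # minutes
var_obs = np.array([1.00, 0.62, 0.40, 0.16, 0.025])   # normalized variance

(sigma2_0, k), _ = curve_fit(cvd, t_obs, var_obs, p0=(1.0, 0.1))

# Given a variance measured in an erupted product, estimate elapsed mixing time.
var_sample = 0.30
t_mixing = np.log(sigma2_0 / var_sample) / k
print(f"decay rate k={k:.3f} per min, inferred mixing time ~{t_mixing:.1f} min")
```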
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
Yang, Binxia; Brahmbhatt, Akshaar; Nieves Torres, Evelyn; Thielen, Brian; McCall, Deborah L.; Engel, Sean; Bansal, Aditya; Pandey, Mukesh K.; Dietz, Allan B.; Leof, Edward B.; DeGrado, Timothy R.; Mukhopadhyay, Debabrata
2016-01-01
Purpose To determine if adventitial transplantation of human adipose tissue–derived mesenchymal stem cells (MSCs) to the outflow vein of B6.Cg-Foxn1nu/J mice with arteriovenous fistula (AVF) at the time of creation would reduce monocyte chemoattractant protein-1 (Mcp-1) gene expression and venous neointimal hyperplasia. The second aim was to track transplanted zirconium 89 (89Zr)–labeled MSCs serially with positron emission tomography (PET) for 21 days. Materials and Methods All animal experiments were performed according to protocols approved by the institutional animal care and use committee. Fifty B6.Cg-Foxn1nu/J mice were used to accomplish the study aims. Green fluorescent protein was used to stably label 2.5 × 105 MSCs, which were injected into the adventitia of the outflow vein at the time of AVF creation in the MSC group. Eleven mice died after AVF placement. Animals were sacrificed on day 7 after AVF placement for real-time polymerase chain reaction (n = 6 for MSC and control groups) and histomorphometric (n = 6 for MSC and control groups) analyses and on day 21 for histomorphometric analysis only (n = 6 for MSC and control groups). In a separate group of experiments (n = 3), animals with transplanted 89Zr-labeled MSCs were serially imaged with PET for 3 weeks. Multiple comparisons were performed with two-way analysis of variance, followed by the Student t test with post hoc Bonferroni correction. Results In vessels with transplanted MSCs compared with control vessels, there was a significant decrease in Mcp-1 gene expression (day 7: mean reduction, 62%; P = .029), with a significant increase in the mean lumen vessel area (day 7: mean increase, 176% [P = .013]; day 21: mean increase, 415% [P = .011]). Moreover, this was accompanied by a significant decrease in Ki-67 index (proliferation on day 7: mean reduction, 81% [P = .0003]; proliferation on day 21: mean reduction, 60%, [P = .016]). Prolonged retention of MSCs at the adventitia was evidenced by serial PET images of 89Zr-labeled cells. Conclusion Adventitial transplantation of MSCs decreases Mcp-1 gene expression, accompanied by a reduction in venous neointimal hyperplasia. © RSNA, 2015 Online supplemental material is available for this article. PMID:26583911
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1999-01-01
Drag reduction tests were conducted on the LASRE/X-33 flight experiment. The LASRE experiment is a flight test of a roughly 20% scale model of an X-33 forebody with a single aerospike engine at the rear. The experiment apparatus is mounted on top of an SR-71 aircraft. This paper suggests a method for reducing base drag by adding surface roughness along the forebody. Calculations show a potential for base drag reductions of 8-14%. Flight results corroborate the base drag reduction, with actual reductions of 15% in the high-subsonic flight regime. An unexpected result of this experiment is that drag benefits were shown to persist well into the supersonic flight regime. Flight results show no overall net drag reduction. Applied surface roughness causes forebody pressures to rise and offset base drag reductions. Apparently the grit displaced streamlines outward, causing forebody compression. Results of the LASRE drag experiments are inconclusive and more work is needed. Clearly, however, the forebody grit application works as a viable drag reduction tool.
Control algorithms for dynamic attenuators.
Hsieh, Scott S; Pelc, Norbert J
2014-06-01
The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
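The closed-form result mentioned for the perfect attenuator can be illustrated with a toy cost model: if the variance contributed by ray i scales as a_i/f_i for incident fluence f_i and dose scales with total fluence, Lagrange stationarity (-a_i/f_i^2 + lambda = 0) gives f_i proportional to sqrt(a_i). The sketch below encodes that toy model; the cost assumptions are ours and are far simpler than the paper's formulation.

```python
# Minimal sketch of closed-form fluence allocation under a dose budget.
# Assumed model: per-ray variance ~ a_i / f_i, dose ~ sum(f_i).
import numpy as np

def optimal_fluence(a, dose_budget):
    """Allocate fluence across rays to minimize mean variance at fixed dose.

    Stationarity of sum(a_i/f_i) + lam*sum(f_i) gives f_i = sqrt(a_i/lam),
    i.e. fluence proportional to the square root of each ray's weight.
    """
    f = np.sqrt(np.asarray(a, dtype=float))
    return dose_budget * f / f.sum()

a = np.array([1.0, 4.0, 9.0, 16.0])   # per-ray attenuation/variance weights (assumed)
f = optimal_fluence(a, dose_budget=100.0)
print(f, np.mean(a / f))              # allocation and resulting mean variance
```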
Midlatitude atmosphere-ocean interaction during El Nino. Part I. The north Pacific ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, M.A.
Atmosphere-ocean modeling experiments are used to investigate the formation of sea surface temperature (SST) anomalies in the North Pacific Ocean during fall and winter of the El Nino year. Experiments in which the NCAR Community Climate Model (CCM) surface fields are used to force a mixed-layer ocean model in the North Pacific (no air-sea feedback) are compared to simulations in which the CCM and North Pacific Ocean model are coupled. Anomalies in the atmosphere and the North Pacific Ocean during El Nino are obtained from the difference between simulations with and without prescribed warm SST anomalies in the tropical Pacific. In both the forced and coupled experiments, the anomaly pattern resembles a composite of the actual SST anomaly field during El Nino: warm SSTs develop along the coast of North America and cold SSTs form in the central Pacific. In the coupled simulations, air-sea interaction results in a 25% to 50% reduction in the magnitude of the SST and mixed-layer depth anomalies, resulting in more realistic SST fields. Coupling also decreases the SST anomaly variance; as a result, the anomaly centers remain statistically significant even though the magnitude of the anomalies is reduced. Three additional sensitivity studies indicate that air-sea feedback and entrainment act to damp SST anomalies while Ekman pumping has a negligible effect on mixed-layer depth and SST anomalies in midlatitudes.
Neurocognitive correlates of helplessness, hopelessness, and well-being in schizophrenia.
Lysaker, P H; Clements, C A; Wright, D E; Evans, J; Marks, K A
2001-07-01
Persons with schizophrenia are widely recognized to experience potent feelings of hopelessness, helplessness, and a fragile sense of well-being. Although these subjective experiences have been linked to positive symptoms, little is known about their relationship to neurocognition. Accordingly, this study examined the relationship of self-reports of hope, self-efficacy, and well-being to measures of neurocognition, symptoms, and coping among 49 persons with schizophrenia or schizoaffective disorder. Results suggest that poorer executive function and verbal memory, and a greater reliance on escape avoidance as a coping mechanism, predicted significantly higher levels of hope and well-being, with multiple regressions accounting for 34% and 20% of the variance (p < .0001), respectively. Self-efficacy predicted lower levels of positive symptoms and a greater preference for escape avoidance as a coping mechanism, with a multiple regression accounting for 9% of the variance (p < .05). Results may suggest that higher levels of neurocognitive impairment and an avoidant coping style may shield some persons with schizophrenia from painful subjective experiences. Theoretical and practical implications for rehabilitation are discussed.
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Meyer, P. J.
1984-01-01
Structure and correlation functions are used to describe atmospheric variability during the 10-11 April day of AVE-SESAME 1979 that coincided with the Red River Valley tornado outbreak. The special mesoscale rawinsonde data are employed in calculations involving temperature, geopotential height, horizontal wind speed and mixing ratio. Functional analyses are performed in both the lower and upper troposphere for the composite 24 h experiment period and at individual 3 h observation times. Results show that mesoscale features are prominent during the composite period. Fields of mixing ratio and horizontal wind speed exhibit the greatest amounts of small-scale variance, whereas temperature and geopotential height contain the least. Results for the nine individual times show that small-scale variance is greatest during the convective outbreak. The functions also are used to estimate random errors in the rawinsonde data. Finally, sensitivity analyses are presented to quantify confidence limits of the structure functions.
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q'. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R' to reduce transients during the gyroscope substitutions.
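The covariance-inflation idea generalizes beyond this patent: temporarily enlarging Q makes the filter trust recent measurements more, which pulls the estimate in quickly after a sensor swap. A minimal scalar sketch follows, with illustrative noise values rather than the patented parameters.

```python
# Sketch: scalar Kalman filter with a temporarily inflated process-noise
# covariance Q during a sensor-swap window, then restored to its operational
# value. All numbers are illustrative assumptions, not the patented gains.
import numpy as np

rng = np.random.default_rng(0)
Q_OP, Q_INTERIM, R = 1e-4, 1e-1, 0.05   # operational Q, interim Q, measurement noise
x_est, P = 0.0, 1.0                     # state estimate and its covariance
truth = 1.0

for step in range(50):
    Q = Q_INTERIM if 10 <= step < 20 else Q_OP   # inflate Q during the swap window
    # Predict (identity dynamics for simplicity).
    P = P + Q
    # Update with a noisy absolute-attitude measurement.
    z = truth + rng.normal(scale=np.sqrt(R))
    K = P / (P + R)                     # Kalman gain: larger P -> more weight on z
    x_est = x_est + K * (z - x_est)
    P = (1 - K) * P
```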
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
Area under the curve (AUC) C-peptide values from 2-h mixed-meal tolerance tests in 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes were modelled from baseline to 12 months after enrolment to produce estimates of the rate of C-peptide loss and its variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
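The roughly 50% sample-size saving is consistent with the standard relation that covariate adjustment deflates residual variance by a factor of (1 - R^2), so the required n shrinks by the same factor. A back-of-envelope sketch with assumed planning numbers (not the TrialNet estimates):

```python
# Back-of-envelope: sample size scales with residual variance, so an ANCOVA
# that explains R^2 of the outcome variance cuts n by roughly (1 - R^2).
# Effect size, power constants, and R^2 below are illustrative assumptions.
import math

def n_per_arm(sigma, delta, alpha_z=1.96, power_z=0.84):
    """Standard two-sample normal approximation (80% power, two-sided 5%)."""
    return 2 * ((alpha_z + power_z) * sigma / delta) ** 2

sigma, delta, r2 = 1.0, 0.4, 0.5   # SD, detectable difference, variance explained
n_raw = n_per_arm(sigma, delta)
n_adj = n_per_arm(sigma * math.sqrt(1 - r2), delta)
print(math.ceil(n_raw), math.ceil(n_adj))   # adjusted n is ~50% of unadjusted
```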
Arjunan, Sridhar P; Kumar, Dinesh K; Bastos, Teodiano
2012-01-01
This study investigated the effect of age on a fractal-based complexity measure of muscle activity and on the variance in the force of isometric muscle contraction. Surface electromyogram (sEMG) and force of muscle contraction were recorded during isometric exercise at maximum voluntary contraction (MVC) from 40 healthy subjects categorized into two groups (Group 1: Young, age range 20-30 years, 10 males and 10 females; Group 2: Old, age range 55-70 years, 10 males and 10 females). The results show that there is a reduction in the complexity of the sEMG associated with aging. They also demonstrate an increase in the coefficient of variance (CoV) of the force of muscle contraction and a decrease in the complexity of the sEMG for the Old age group when compared with the Young age group.
Koen, Joshua D.; Aly, Mariam; Wang, Wei-Chun; Yonelinas, Andrew P.
2013-01-01
A prominent finding in recognition memory is that studied items are associated with more variability in memory strength than new items. Here, we test three competing theories for why this occurs - the encoding variability, attention failure, and recollection accounts. Distinguishing amongst these theories is critical because each provides a fundamentally different account of the processes underlying recognition memory. The encoding variability and attention failure accounts propose that old item variance will be unaffected by retrieval manipulations because the processes producing this effect are ascribed to encoding. The recollection account predicts that both encoding and retrieval manipulations that preferentially affect recollection will affect memory variability. These contrasting predictions were tested by examining the effect of response speeding (Experiment 1), dividing attention at retrieval (Experiment 2), context reinstatement (Experiment 3), and increased test delay (Experiment 4) on recognition performance. The results of all four experiments confirmed the predictions of the recollection account, and were inconsistent with the encoding variability account. The evidence supporting the attention failure account was mixed, with two of the four experiments confirming the account and two disconfirming the account. These results indicate that encoding variability and attention failure are insufficient accounts of memory variance, and provide support for the recollection account. Several alternative theoretical accounts of the results are also considered. PMID:23834057
Logarithmic scaling for fluctuations of a scalar concentration in wall turbulence.
Mouri, Hideaki; Morinaga, Takeshi; Yagi, Toshimasa; Mori, Kazuyasu
2017-12-01
Within wall turbulence, there is a sublayer where the mean velocity and the variance of velocity fluctuations vary logarithmically with the height from the wall. This logarithmic scaling is also known for the mean concentration of a passive scalar. By using heat as such a scalar in a laboratory experiment of a turbulent boundary layer, the existence of the logarithmic scaling is shown here for the variance of fluctuations of the scalar concentration. It is reproduced by a model of energy-containing eddies that are attached to the wall.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-12-01
The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
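In the simplest special case, a diagonal covariance with a budget constraint, the two quantities defined above can be computed in closed form: minimizing sum(w_i^2 * sigma_i^2) subject to sum(w_i) = 1 gives weights inversely proportional to each asset's variance. A hedged sketch of that special case follows; the replica-analysis asymptotics of the paper are not reproduced, and the variances are assumed values.

```python
# Sketch: minimum-variance portfolio when asset return variances differ
# (diagonal covariance, budget constraint sum(w) = 1).
# Closed form: w_i proportional to 1 / sigma_i^2.
import numpy as np

sigma2 = np.array([0.5, 1.0, 2.0, 4.0])   # heterogeneous return variances (assumed)
w = (1.0 / sigma2) / np.sum(1.0 / sigma2)

min_risk = np.sum(w**2 * sigma2)          # minimal investment risk
concentration = np.sum(w**2)              # investment concentration
print(w, min_risk, concentration)
```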
Smoski, Moria J.; Suarez, Edward C.; Brantley, Jeffrey G.; Ekblad, Andrew G.; Lynch, Thomas R.; Wolever, Ruth Quillian
2015-01-01
Abstract Objective: Mindfulness-based stress reduction (MBSR) is a secular meditation training program that reduces depressive symptoms. Little is known, however, about the degree to which a participant's spiritual and religious background, or other demographic characteristics associated with risk for depression, may affect the effectiveness of MBSR. Therefore, this study tested whether individual differences in religiosity, spirituality, motivation for spiritual growth, trait mindfulness, sex, and age affect MBSR effectiveness. Methods: As part of an open trial, multiple regression was used to analyze variation in depressive symptom outcomes among 322 adults who enrolled in an 8-week, community-based MBSR program. Results: As hypothesized, depressive symptom severity decreased significantly in the full study sample (d=0.57; p<0.01). After adjustment for baseline symptom severity, moderation analyses revealed no significant differences in the change in depressive symptoms following MBSR as a function of spirituality, religiosity, trait mindfulness, or demographic variables. Paired t tests found consistent, statistically significant (p<0.01) reductions in depressive symptoms across all subgroups by religious affiliation, intention for spiritual growth, sex, and baseline symptom severity. After adjustment for baseline symptom scores, age, sex, and religious affiliation, a significant proportion of variance in post-MBSR depressive symptoms was uniquely explained by changes in both spirituality (β=−0.15; p=0.006) and mindfulness (β=−0.17; p<0.001). Conclusions: These findings suggest that MBSR, a secular meditation training program, is associated with improved depressive symptoms regardless of affiliation with a religion, sense of spirituality, trait level of mindfulness before MBSR training, sex, or age. Increases in both mindfulness and daily spiritual experiences uniquely explained improvement in depressive symptoms. PMID:25695903
Harkness, Mark; Fisher, Angela; Lee, Michael D; Mack, E Erin; Payne, Jo Ann; Dworatzek, Sandra; Roberts, Jeff; Acheson, Carolyn; Herrmann, Ronald; Possolo, Antonio
2012-04-01
A large, multi-laboratory microcosm study was performed to select amendments for supporting reductive dechlorination of the high levels of trichloroethylene (TCE) found at an industrial site in the United Kingdom (UK) containing dense non-aqueous phase liquid (DNAPL) TCE. The study was designed as a fractional factorial experiment involving 177 bottles distributed between four industrial laboratories and was used to assess the impact of six electron donors, bioaugmentation, addition of supplemental nutrients, and two TCE levels (0.57 and 1.90 mM, or 75 and 250 mg/L in the aqueous phase) on TCE dechlorination. Performance was assessed based on the concentration changes of TCE and its reductive dechlorination degradation products. The chemical data were evaluated using analysis of variance (ANOVA) and survival analysis techniques to determine both main effects and important interactions for all the experimental variables during the 203-day study. The statistically based design and analysis provided powerful tools that aided decision-making for field application of this technology. The analysis showed that emulsified vegetable oil (EVO), lactate, and methanol were the most effective electron donors, promoting rapid and complete dechlorination of TCE to ethene. Bioaugmentation and nutrient addition also had a statistically significant positive impact on TCE dechlorination. In addition, the microbial community was measured using phospholipid fatty acid analysis (PLFA) for quantification of total biomass and characterization of the community structure, and quantitative polymerase chain reaction (qPCR) for enumeration of Dehalococcoides organisms (Dhc) and the vinyl chloride reductase (vcrA) gene. The highest increase in levels of total biomass and Dhc was observed in the EVO microcosms, which correlated well with the dechlorination results. Copyright © 2012 Elsevier B.V. All rights reserved.
Lee, Jounghee; Park, Sohyun
2015-01-01
Objectives The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. Methods We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We implemented Chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. Results This study involved 104 individuals employed at the worksite cafeterias, comprised of 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium reduction practices were higher, but perceived barriers appeared to be lower in participants with high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted to have more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were use of processed foods and limited methods of sodium-reduced cooking in worksite cafeterias. Conclusion To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods. PMID:27169011
Michael Hoppus; Stan Arner; Andrew Lister
2001-01-01
A reduction in variance for estimates of forest area and volume in the state of Connecticut was accomplished by stratifying FIA ground plots using raw, transformed and classified Landsat Thematic Mapper (TM) imagery. A US Geological Survey (USGS) Multi-Resolution Landscape Characterization (MRLC) vegetation cover map for Connecticut was used to produce a forest/non-...
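The mechanics of this variance reduction can be sketched with a post-stratified estimator: the cover map assigns plots to strata whose within-stratum variances are much smaller than the overall variance. The toy example below uses synthetic plot volumes and assumed stratum weights; in practice the weights would come from the classified TM pixel counts.

```python
# Sketch: variance of a post-stratified mean vs. simple random sampling.
# Synthetic 'plot volume' data; stratum area weights are assumed.
import numpy as np

rng = np.random.default_rng(1)
forest = rng.normal(120.0, 30.0, size=60)      # plot volumes in forest stratum
nonforest = rng.normal(5.0, 4.0, size=40)      # plot volumes in non-forest stratum
W = np.array([0.6, 0.4])                       # stratum area weights from the map

strata = [forest, nonforest]
mean_ps = sum(w * s.mean() for w, s in zip(W, strata))
var_ps = sum(w**2 * s.var(ddof=1) / len(s) for w, s in zip(W, strata))

pooled = np.concatenate(strata)
var_srs = pooled.var(ddof=1) / len(pooled)     # ignoring the strata
print(f"post-stratified var {var_ps:.2f} vs SRS var {var_srs:.2f}")
```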
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Donald Rumsfeld designated “U.S. Transportation Command as the single Department of Defense Distribution Process Owner (DPO)” (USTRANSCOM, 2004). ... The following paragraphs explain OptQuest’s functionality and capabilities as described by Laguna (1997) and Glover et al. (1999), as well as the OptQuest for ARENA ... throughout the solution space (Glover et al., 1999). Heuristics are strategies (in this case, algorithms) that use different techniques and available...
Deconstructing Demand: The Anthropogenic and Climatic Drivers of Urban Water Consumption.
Hemati, Azadeh; Rippy, Megan A; Grant, Stanley B; Davis, Kristen; Feldman, David
2016-12-06
Cities in drought prone regions of the world such as South East Australia are faced with escalating water scarcity and security challenges. Here we use 72 years of urban water consumption data from Melbourne, Australia, a city that recently overcame a 12 year "Millennium Drought", to evaluate (1) the relative importance of climatic and anthropogenic drivers of urban water demand (using wavelet-based approaches) and (2) the relative contribution of various water saving strategies to demand reduction during the Millennium Drought. Our analysis points to conservation as a dominant driver of urban water savings (69%), followed by nonrevenue water reduction (e.g., reduced meter error and leaks in the potable distribution system; 29%), and potable substitution with alternative sources like rain or recycled water (3%). Per-capita consumption exhibited both climatic and anthropogenic signatures, with rainfall and temperature explaining approximately 55% of the variance. Anthropogenic controls were also strong (up to 45% variance explained). These controls were nonstationary and frequency-specific, with conservation measures like outdoor water restrictions impacting seasonal water use and technological innovation/changing social norms impacting lower frequency (baseline) use. The above-noted nonstationarity implies that wavelets, which do not assume stationarity, show promise for use in future predictive models of demand.
NASA Astrophysics Data System (ADS)
Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.
2012-11-01
Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies in order to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, with the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model was obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model which accounts for both types of observations, we performed a sequential inversion of seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.
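The "variance reduction" quoted here is the standard residual-based statistic. A minimal sketch of that definition follows, together with a generic linear Birch-law conversion whose coefficients must be calibrated regionally (no specific values are asserted here); the arrays in the usage line are synthetic.

```python
# Sketch: the variance-reduction statistic behind "explained about 80% of
# the observed gravity anomaly", plus an assumed linear Birch-law form.
import numpy as np

def variance_reduction(observed, predicted):
    """1 - sum(residual^2)/sum(observed^2); 1.0 means a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    residual = observed - np.asarray(predicted, dtype=float)
    return 1.0 - np.sum(residual**2) / np.sum(observed**2)

def birch_density(vp, a, b):
    """Linear velocity-density relation rho = a + b * vp; the coefficients
    a, b must be calibrated for the region (none are asserted here)."""
    return a + b * vp

obs = np.array([12.0, -8.0, 5.0, -3.0])     # observed anomaly (synthetic)
pred = np.array([10.5, -7.0, 4.6, -2.1])    # model prediction (synthetic)
print(f"variance reduction: {variance_reduction(obs, pred):.0%}")
```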
Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.
Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John
2008-02-01
A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
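The weighting side of this comparison can be sketched compactly: fit a logistic response-propensity model, then weight responders by inverse propensity when estimating prevalence. The variables below are synthetic stand-ins for the demographic predictors named above, not the survey data.

```python
# Sketch: inverse-propensity weighting for survey nonresponse.
# X holds demographic predictors (stand-ins for occupancy, bank card
# ownership, gender, etc.); 'responded' flags who answered. Data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 4))
responded = rng.random(n) < 1 / (1 + np.exp(-(0.3 * X[:, 0] - 0.5)))
y = (rng.random(n) < 0.2 + 0.05 * X[:, 0]).astype(float)  # chronic disease flag

propensity = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
w = 1.0 / propensity[responded]                 # inverse-propensity weights
est = np.average(y[responded], weights=w)       # weighted prevalence estimate
print(f"weighted prevalence {est:.3f} vs true {y.mean():.3f}")
```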
Using negative emotional feedback to modify risky behavior of young moped riders.
Megías, Alberto; Cortes, Abilio; Maldonado, Antonio; Cándido, Antonio
2017-05-19
The aim of this research was to investigate whether the use of messages with negative emotional content is effective in promoting safe behavior of moped riders and how exactly these messages modulate rider behavior. Participants received negative feedback when performing risky behaviors using a computer task. The effectiveness of this treatment was subsequently tested in a riding simulator. The results demonstrated that riders receiving negative feedback had a lower number of traffic accidents than a control group. The reduction in accidents was accompanied by a set of changes in riding behavior. We observed a lower average speed and greater respect for speed limits. Furthermore, analysis of the steering wheel variance, throttle variance, and average braking force provided evidence for a more even and homogeneous riding style. This greater abidance of traffic regulations and friendlier riding style could explain some of the causes behind the reduction in accidents. The use of negative emotional feedback in driving schools or advanced rider assistance systems could enhance riding performance, making riders aware of unsafe practices and helping them to establish more accurate riding habits. Moreover, the combination of riding simulators and feedback - for example, in the training of novice riders and traffic offenders - could be an efficient tool to improve their hazard perception skills and promote safer behaviors.
How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?
Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J
2016-04-05
Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
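The dimensionality argument rests on PCA of the product-by-indicator matrix. The sketch below shows that step with synthetic scores (real LCIA indicators are strongly correlated, so far fewer components suffice than with random noise); the paper's optimization over minimal indicator subsets is not reproduced.

```python
# Sketch: how many principal components cover ~92% of the variance in
# product-by-indicator scores. Data are synthetic; with real, correlated
# indicators the count is far smaller than for random noise.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
scores = rng.normal(size=(976, 135))          # products x impact indicators

pca = PCA().fit(StandardScaler().fit_transform(scores))
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_needed = int(np.searchsorted(cumvar, 0.92)) + 1
print(f"{n_needed} components cover 92% of the variance")
```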
Variance of transionospheric VLF wave power absorption
NASA Astrophysics Data System (ADS)
Tao, X.; Bortnik, J.; Friedrich, M.
2010-07-01
To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method, using the standard ionospheric model IRI and in situ observational data. We first verify Helliwell's classic absorption curves using our full wave code. Then we show that the IRI model gives overall smaller wave absorption than Helliwell's curves. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that is more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet-time wave absorption is smaller than Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.
Associations of gender inequality with child malnutrition and mortality across 96 countries.
Marphatia, A A; Cole, T J; Grijalva-Eternod, C; Wells, J C K
2016-01-01
National efforts to reduce low birth weight (LBW) and child malnutrition and mortality prioritise economic growth. However, this may be ineffective, while rising gross domestic product (GDP) also imposes health costs, such as obesity and non-communicable disease. There is a need to identify other potential routes for improving child health. We investigated associations of the Gender Inequality Index (GII), a national marker of women's disadvantages in reproductive health, empowerment and labour market participation, with the prevalence of LBW, child malnutrition (stunting and wasting) and mortality under 5 years in 96 countries, adjusting for national GDP. The GII displaced GDP as a predictor of LBW, explaining 36% of the variance. Independent of GDP, the GII explained 10% of the variance in wasting and stunting and 41% of the variance in child mortality. Simulations indicated that reducing GII could lead to major reductions in LBW, child malnutrition and mortality in low- and middle-income countries. Independent of national wealth, reducing women's disempowerment relative to men may reduce LBW and promote child nutritional status and survival. Longitudinal studies are now needed to evaluate the impact of efforts to reduce societal gender inequality.
Retrospective analysis of a detector fault for a full field digital mammography system
NASA Astrophysics Data System (ADS)
Marshall, N. W.
2006-11-01
This paper describes objective and subjective image quality measurements acquired as part of a routine quality assurance (QA) programme for an amorphous selenium (a-Se) full field digital mammography (FFDM) system between August 2004 and February 2005. During this period, the FFDM detector developed a fault and was replaced. A retrospective analysis of objective image quality parameters (modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE)) is presented to try to gain a deeper understanding of the detector problem that occurred. These measurements are discussed in conjunction with routine contrast-detail (c-d) results acquired with the CDMAM (Artinis, The Netherlands) test object. There was a significant reduction in MTF over this period of time, indicating an increase in blurring occurring within the a-Se converter layer. This blurring was not isotropic, being greater in the data line direction (left to right across the detector) than in the gate line direction (chest wall to nipple). The initial value of the 50% MTF point was 6 mm^-1; for the faulty detector the 50% MTF points occurred at 3.4 mm^-1 and 1.0 mm^-1 in the gate line and data line directions, respectively. Prior to NNPS estimation, variance images were formed from the detector flat-field images. The spatial distribution of variance was not uniform, suggesting that the physical blurring process was not constant across the detector. This change in variance with image position implied that the stationarity of the noise statistics within the image was limited and that care would be needed when performing objective measurements. The NNPS measurements confirmed the results found for the MTF, with a strong reduction in NNPS as a function of spatial frequency. This reduction was far more severe in the data line direction. A somewhat tentative DQE estimate was made; in the gate line direction there was little change in DQE up to 2.5 mm^-1, but at the Nyquist frequency the DQE had fallen to approximately 35% of the original value. There was severe attenuation of DQE in the data line direction, the DQE falling to less than 0.01 above approximately 3.0 mm^-1. C-d results showed an increase in threshold contrast of approximately 25% for details less than 0.2 mm in diameter, while no reduction in c-d performance was found at the largest detail diameters (1.0 mm and above). Despite the detector fault, the c-d curve was found to pass the European protocol acceptable c-d curve.
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
Computing Power of Tests of the Variance of Treatment Effects in Designs with Two Levels of Nesting
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2008-01-01
Experiments that involve nested structures may assign treatment conditions either to entire groups (such as classrooms or schools) or individuals within groups (such as students). Although typically the interest in field experiments is in determining the significance of the overall treatment effect, it is equally important to examine the…
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
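For the two-level case, the textbook optimal-allocation result sets the within-cluster sample size from the cluster-to-unit cost ratio and the intraclass correlation. The sketch below encodes that standard formula; it is background for, not a reproduction of, the article's three- and four-level extensions, and the example costs are assumed.

```python
# Sketch: textbook optimal within-cluster sample size for a two-level
# design at fixed total cost: n* = sqrt((c_cluster/c_unit) * (1-icc)/icc).
import math

def optimal_n_per_cluster(cost_cluster, cost_unit, icc):
    """Within-cluster n minimizing the treatment-effect variance per dollar."""
    return math.sqrt((cost_cluster / cost_unit) * (1 - icc) / icc)

n_star = optimal_n_per_cluster(cost_cluster=200, cost_unit=10, icc=0.1)
print(f"sample about {n_star:.0f} students per classroom")  # ~13
```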
Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map
NASA Astrophysics Data System (ADS)
Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.
2016-03-01
Multipath is one major error source in high-accuracy GNSS positioning. Various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches utilize the spatiotemporal repeatability of the multipath effect under a static environment, hence they can be implemented to generate multipath correction models for real-time GNSS data processing. We focus on the spatiotemporal repeatability-based MHM and SF/ASF approaches and compare their performances for multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, with residual variance reductions of about 50% for a short span (the next 5 days) that remain at roughly the 45% level for a longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is medium to low frequency.
Dynamic optimization and conformity in health behavior and life enjoyment over the life cycle
Bejarano, Hernán D.; Kaplan, Hillard; Rassenti, Stephen
2015-01-01
This article examines individual and social influences on investments in health and enjoyment from immediate consumption. Our lab experiment mimics the problem of health investment over a lifetime (Grossman, 1972a,b). Incentives to find the appropriate expenditures on life enjoyment and health are given by making income in each period a function of previous health investments. In order to model social effects in the experiment, we randomly assigned individuals to chat/observation groups. Groups were permitted to chat freely between repeated lifetimes. Two treatments were employed: in the Independent-rewards treatment, an individual's rewards from investments in life enjoyment depend only on his own choices; in the Interdependent-rewards treatment, rewards depend not only on an individual's choices but also on their similarity to the choices of the others in their group, generating a premium on conformity. The principal hypothesis is that gains from conformity increase variance in health behavior among groups and can lead to suboptimal performance. We tested three predictions and each was supported by the data: the Interdependent-rewards treatment (1) decreased within-group variance, (2) increased between-group variance, and (3) increased the likelihood of behavior far from the optimum with respect to the dynamic problem. We also tested and found support for a series of subsidiary hypotheses: (4) subjects engaged in helpful chat in both treatments; (5) there was significant heterogeneity among both subjects and groups in chat frequencies; (6) chat was most common early in the experiment; and (7) the Interdependent-rewards treatment increased strategic chat frequency. Incentives for conformity appear to promote prosocial behavior, but also increase variance among groups, leading to convergence on suboptimal strategies for some groups. We discuss these results in light of the growing literature focusing on social networks and health outcomes. PMID:26136666
Boundary Conditions for Scalar (Co)Variances over Heterogeneous Surfaces
NASA Astrophysics Data System (ADS)
Machulskaya, Ekaterina; Mironov, Dmitrii
2018-05-01
The problem of boundary conditions for the variances and covariances of scalar quantities (e.g., temperature and humidity) at the underlying surface is considered. If the surface is treated as horizontally homogeneous, Monin-Obukhov similarity suggests the Neumann boundary conditions that set the surface fluxes of scalar variances and covariances to zero. Over heterogeneous surfaces, these boundary conditions are not a viable choice since the spatial variability of various surface and soil characteristics, such as the ground fluxes of heat and moisture and the surface radiation balance, is not accounted for. Boundary conditions are developed that are consistent with the tile approach used to compute scalar (and momentum) fluxes over heterogeneous surfaces. To this end, the third-order transport terms (fluxes of variances) are examined analytically using a triple decomposition of fluctuating velocity and scalars into the grid-box mean, the fluctuation of tile-mean quantity about the grid-box mean, and the sub-tile fluctuation. The effect of the proposed boundary conditions on mixing in an archetypical stably-stratified boundary layer is illustrated with a single-column numerical experiment. The proposed boundary conditions should be applied in atmospheric models that utilize turbulence parametrization schemes with transport equations for scalar variances and covariances including the third-order turbulent transport (diffusion) terms.
Wiegerink, Diana J H G; Stam, Henk J; Ketelaar, Marjolijn; Cohen-Kettenis, Peggy T; Roebroeck, Marij E
2012-01-01
To study determinants of romantic relationships and sexual activity of young adults with cerebral palsy (CP), focusing on personal and environmental factors. A cohort study was performed with 74 young adults (46 men; 28 women) aged 20-25 years (SD 1.4) with CP (49% unilateral CP, 76% GMFCS level I, 85% MACS level I). All participants were of normal intelligence. Romantic relationships and sexual activity (outcome measures) and personal and environmental factors (associated factors) were assessed. Associations were analyzed using logistic regression analyses. More females than males with CP were in a current romantic relationship. Self-esteem, sexual esteem and feelings of competence regarding self-efficacy contributed positively to having a current romantic relationship; a negative parenting style contributed negatively. Age and gross motor functioning explained 20% of the variance in experience with intercourse. In addition, sexual esteem and taking initiative contributed significantly to intercourse experience. For young adults with CP, personal factors (20-35% explained variance) seem to contribute more than environmental factors (9-12% explained variance) to current romantic relationships and sexual experiences. We advise parents and professionals to focus on self-efficacy, self-esteem and sexual self-esteem in the development of young adults with CP. Implications: the severity of gross motor functioning contributed somewhat to sexual activities, but not to romantic relationships; high self-efficacy, self-esteem and sexual self-esteem can facilitate involvement in romantic and sexual relationships for young adults with CP.
Trait and State Positive Emotional Experience in Schizophrenia: A Meta-Analysis
Yan, Chao; Cao, Yuan; Zhang, Yang; Song, Li-Ling; Cheung, Eric F. C.; Chan, Raymond C. K.
2012-01-01
Background Prior meta-analyses indicated that people with schizophrenia show impairment in trait hedonic capacity but retain their state hedonic experience (valence) in laboratory-based assessments. Little is known about what is the extent of differences for state positive emotional experience (especially arousal) between people with schizophrenia and healthy controls. It is also not clear whether negative symptoms and gender effect contribute to the variance of positive affect. Methods and Findings The current meta-analysis examined 21 studies assessing state arousal experience, 40 studies measuring state valence experience, and 47studies assessing trait hedonic capacity in schizophrenia. Patients with schizophrenia demonstrated significant impairment in trait hedonic capacity (Cohen’s d = 0.81). However, patients and controls did not statistically differ in state hedonic (valence) as well as exciting (arousal) experience to positive stimuli (Cohen’s d = −0.24 to 0.06). They also reported experiencing relatively robust state aversion and calmness to positive stimuli compared with controls (Cohen’s d = 0.75, 0.56, respectively). Negative symptoms and gender contributed to the variance of findings in positive affect, especially trait hedonic capacity in schizophrenia. Conclusions Our findings suggest that schizophrenia patients have no deficit in state positive emotional experience but impairment in “noncurrent” hedonic capacity, which may be mediated by negative symptoms and gender effect. PMID:22815785
Wright, George W; Simon, Richard M
2003-12-12
Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model in which the within-gene variances are drawn from an inverse gamma distribution whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
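The shrinkage idea can be sketched with the generic empirical-Bayes moderated variance, a closely related parameterization of the same inverse-gamma family of priors: pool each gene's sample variance with a prior variance weighted by prior degrees of freedom. The prior values below are assumptions for illustration, not estimates from the paper.

```python
# Sketch: empirical-Bayes variance shrinkage for small-sample gene tests.
# Generic moderated form: s2_tilde = (d0*s0^2 + d*s2) / (d0 + d), where the
# prior parameters d0 and s0^2 would be fit across all genes (assumed here).
import numpy as np

def moderated_t(x1, x2, d0=4.0, s0_sq=0.05):
    """Two-sample t per gene (rows) using a shrunken pooled variance."""
    n1, n2 = x1.shape[1], x2.shape[1]
    d = n1 + n2 - 2
    s_sq = ((n1 - 1) * x1.var(1, ddof=1) + (n2 - 1) * x2.var(1, ddof=1)) / d
    s_tilde = (d0 * s0_sq + d * s_sq) / (d0 + d)     # shrunken variance
    se = np.sqrt(s_tilde * (1 / n1 + 1 / n2))
    return (x1.mean(1) - x2.mean(1)) / se            # df = d0 + d under the model

rng = np.random.default_rng(4)
t = moderated_t(rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3)))
```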
Management of pediatric mandible fractures.
Goth, Stephen; Sawatari, Yoh; Peleg, Michael
2012-01-01
The pediatric mandible fracture is a rare occurrence when compared with the number of mandible fractures that occur within the adult population. Although the clinician who manages facial fractures may never encounter a pediatric mandible fracture, it is a unique injury that warrants a comprehensive discussion. Because of the unique anatomy, dentition, and growth of the pediatric patient, the management of a pediatric mandible fracture requires true diligence with a variance in treatment ranging from soft diet to open reduction and internal fixation. In addition to the variability in treatment, any trauma to the face of a child requires additional management factors including child abuse issues and long-term sequelae involving skeletal growth, which may affect facial symmetry and occlusion. The following is a review of the incidence, relevant anatomy, clinical and radiographic examination, and treatment modalities for specific fracture types of the pediatric mandible based on the clinical experience at the University of Miami/Jackson Memorial Hospital Oral and Maxillofacial Surgery program. In addition, a review of the literature regarding the management of the pediatric mandible fracture was performed to offer a more comprehensive overview of this unique subset of facial fractures.
Chen, Xi; Wu, Qi; Ren, He; Chang, Fu-Kuo
2018-01-01
In this work, a data-driven approach for identifying the flight state of a self-sensing wing structure with an embedded multi-functional sensing network is proposed. The flight state is characterized by the structural vibration signals recorded from a series of wind tunnel experiments under varying angles of attack and airspeeds. A large feature pool is created by extracting potential features from the signals covering the time domain, the frequency domain as well as the information domain. Special emphasis is given to feature selection in which a novel filter method is developed based on the combination of a modified distance evaluation algorithm and a variance inflation factor. Machine learning algorithms are then employed to establish the mapping relationship from the feature space to the practical state space. Results from two case studies demonstrate the high identification accuracy and the effectiveness of the model complexity reduction via the proposed method, thus providing new perspectives of self-awareness towards the next generation of intelligent air vehicles. PMID:29710832
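The variance inflation factor half of the proposed filter can be sketched directly; below is a minimal version using standard OLS auxiliary regressions (the modified distance evaluation part of the filter is not reproduced here, and the VIF threshold is an assumption).

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    feature j on all remaining features. X: (n_samples, n_features)."""
    n, p = X.shape
    vifs = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs[j] = 1.0 / max(1.0 - r2, 1e-12)  # guard against R^2 -> 1
    return vifs

# Features with VIF above a chosen cutoff (commonly ~10) are considered
# redundant with the rest of the pool and can be dropped.
```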
Hagmann, Patric; Deco, Gustavo
2015-01-01
How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model’s prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information. PMID:26317432
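The entropy argument can be made concrete if activity is approximated as multivariate Gaussian, in which case entropy depends only on the covariance matrix. The illustration below uses made-up covariances to show how shrinking variances lowers entropy.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian:
    H = 0.5 * log det(2 * pi * e * cov)."""
    n = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

# Toy covariances (not the paper's data): task-driven activity with
# smaller variances occupies a smaller activity space, so its entropy
# is lower than that of rest.
rest = np.array([[1.0, 0.2], [0.2, 1.0]])
task = np.array([[0.6, 0.3], [0.3, 0.6]])
print(gaussian_entropy(rest) > gaussian_entropy(task))  # True
```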
Major factors influencing bacterial leaching of heavy metals (Cu and Zn) from anaerobic sludge.
Couillard, D; Chartier, M; Mercier, G
1994-01-01
Anaerobically digested sewage sludges were treated for heavy metal removal through a biological solubilization process called bacterial leaching (bioleaching). The solubilization of copper and zinc from these sludges is described in this study, using continuously stirred tank reactors with and without sludge recycling at different mean hydraulic residence times (1, 2, 3 and 4 days). Significant linear equations were established for the solubilization of zinc and copper according to relevant parameters: oxidation-reduction potential (ORP), pH and residence time (t). Zinc solubilization was related to the residence time with an r2 (explained variance) of 0.82. Considering only t = 2 and 3 days, explained variances of 0.31 and 0.24 were found for zinc solubilization as a function of ORP and pH, respectively, indicating the minor importance of these two factors for this metal over the pH and ORP ranges tested. Cu solubilization was weakly correlated with mean hydraulic residence time (r2 = 0.48), while it was highly correlated with ORP (r2 = 0.80) and pH (r2 = 0.62), considering only t of 2 and 3 days in the case of pH and ORP. The ORP dependence of Cu solubilization has been clearly demonstrated in this study. In addition, the importance of the substrate concentration for Cu solubilization has been confirmed. The hypothesis of a biological solubilization of Cu by the indirect mechanism has been supported. The results permit, under optimum conditions, the drawing of linear equations that allow prediction of metal solubilization efficiencies during treatment from the parameters pH (Cu), ORP (Cu) and residence time (Cu and Zn). These linear regressions will be a useful tool for routine operation of the process.
Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries
NASA Astrophysics Data System (ADS)
De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.
This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by reductive acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were realised according to a full factorial design, and simple regression equations for Mn, Zn and Fe extraction were then determined from the experimental data as functions of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by analysis of variance (ANOVA). This analysis identified the best operating conditions for the reductive acid leaching: 70% of the manganese and 100% of the zinc were extracted after 5 h at 80 °C, with 20% pulp density, 1.8 M sulphuric acid and 59.4 g L⁻¹ of oxalic acid. Manganese and zinc extraction yields both higher than 96% were obtained by using two sequential leaching steps.
Clark, Larkin; Wells, Martha H; Harris, Edward F; Lou, Jennifer
2016-01-01
To determine if aggressiveness of primary tooth preparation varied among different brands of zirconia and stainless steel (SSC) crowns. One hundred primary typodont teeth were divided into five groups (10 posterior and 10 anterior) and assigned to: Cheng Crowns (CC); EZ Pedo (EZP); Kinder Krowns (KKZ); NuSmile (NSZ); and SSC. Teeth were prepared, and assigned crowns were fitted. Teeth were weighed prior to and after preparation. Weight changes served as a surrogate measure of tooth reduction. Analysis of variance showed a significant difference in tooth reduction among brands/types for both anterior and posterior teeth. Tukey's honest significant difference test (HSD), when applied to anterior data, revealed that SSCs required significantly less tooth removal compared to the composite of the four zirconia brands, which showed no significant difference among them. Tukey's HSD test, applied to posterior data, revealed that CC required significantly greater removal of crown structure, while EZP, KKZ, and NSZ were statistically equivalent, and SSCs required significantly less removal. Zirconia crowns required more tooth reduction than stainless steel crowns for primary anterior and posterior teeth. Tooth reduction for anterior zirconia crowns was equivalent among brands. For posterior teeth, reduction for three brands (EZ Pedo, Kinder Krowns, NuSmile) did not differ, while Cheng Crowns required more reduction.
Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction
NASA Astrophysics Data System (ADS)
Scarnati, Theresa; Gelb, Anne
2018-04-01
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method to form SAR images from multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of simulated phase history data that result in speckled images as well as images from the MSTAR T-72 database.
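A sketch of the variance-based weighting at the heart of such joint-sparsity methods follows. The inverse-variance weight rule and its normalization are assumptions for illustration; the paper's exact construction may differ.

```python
import numpy as np

def joint_sparsity_weights(images, eps=1e-3):
    """Pixelwise weights from the variance across co-registered SAR
    reconstructions of one scene (stack shape: (n_images, H, W)).

    Pixels whose values vary strongly across images (speckle, edges)
    get small weights so a weighted-l1 sparsity penalty does not
    over-penalize them; stable background pixels get large weights.
    """
    v = images.var(axis=0)
    v = v / v.max()             # normalize variance map to [0, 1]
    return 1.0 / (v + eps)      # inverse-variance weights (assumed form)
```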
Individual and population-level responses to ocean acidification.
Harvey, Ben P; McKeown, Niall J; Rastrick, Samuel P S; Bertolini, Camilla; Foggo, Andy; Graham, Helen; Hall-Spencer, Jason M; Milazzo, Marco; Shaw, Paul W; Small, Daniel P; Moore, Pippa J
2016-01-29
Ocean acidification is predicted to have detrimental effects on many marine organisms and ecological processes. Despite growing evidence for direct impacts on specific species, few studies have simultaneously considered the effects of ocean acidification on individuals (e.g. consequences for energy budgets and resource partitioning) and population level demographic processes. Here we show that ocean acidification increases energetic demands on gastropods resulting in altered energy allocation, i.e. reduced shell size but increased body mass. When scaled up to the population level, long-term exposure to ocean acidification altered population demography, with evidence of a reduction in the proportion of females in the population and genetic signatures of increased variance in reproductive success among individuals. Such increased variance enhances levels of short-term genetic drift which is predicted to inhibit adaptation. Our study indicates that even against a background of high gene flow, ocean acidification is driving individual- and population-level changes that will impact eco-evolutionary trajectories.
Beyond the Rainbow: Retrieval Practice Leads to Better Spelling than Does Rainbow Writing
ERIC Educational Resources Information Center
Jones, Angela C.; Wardlow, Liane; Pan, Steven C.; Zepeda, Cristina; Heyman, Gail D.; Dunlosky, John; Rickard, Timothy C.
2016-01-01
In three experiments, we compared the effectiveness of rainbow writing and retrieval practice, two common methods of spelling instruction. In experiment 1 (n = 14), second graders completed 2 days of spelling practice, followed by spelling tests 1 day and 5 weeks later. A repeated measures analysis of variance demonstrated that spelling accuracy…
ERIC Educational Resources Information Center
Fantozzi, Victoria B.
2013-01-01
This qualitative study examines the variance in the ways that four student teachers made meaning of the experience of being observed by their cooperating teachers and university supervisors. Using Kegan's (1994) theory of cognitive development, the study focuses on the differences in the ways the teacher candidates constructed the prospect of…
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
Method for simulating dose reduction in digital mammography using the Anscombe transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.
2016-06-15
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions. PMID:27277017
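In the spirit of the method above, a minimal dose-reduction sketch is given below. It is a simplification under an assumed Poisson-dominated, linear detector with a fixed offset; the published method instead calibrates the injected noise from flat-field acquisitions and handles the signal dependence with an Anscombe-domain noise mask.

```python
import numpy as np

def simulate_lower_dose(img, ratio, offset=0.0, rng=None):
    """Simulate a reduced-dose image from a standard-dose one.

    `ratio` = simulated dose / full dose. The signal is scaled, then
    zero-mean signal-dependent noise is added so the total variance
    matches the lower dose: for Poisson-like noise, the full-dose pixel
    has variance ~ signal, the scaled image carries ratio^2 * signal,
    so the residual variance to inject is ratio * (1 - ratio) * signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    signal = img - offset                      # remove detector offset
    scaled = ratio * signal
    extra_var = np.clip(ratio * (1.0 - ratio) * signal, 0, None)
    return scaled + rng.normal(0.0, np.sqrt(extra_var)) + offset
```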
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing, a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
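The chronometer reduces to inverting an exponential decay; a one-function worked form is shown below (variable names are illustrative, not the paper's notation).

```python
import numpy as np

def mixing_to_eruption_time(var0, var_t, cvd_rate):
    """Elapsed mixing time from exponential concentration-variance
    decay: var(t) = var0 * exp(-R * t)  =>  t = ln(var0 / var_t) / R,
    where R is the experimentally calibrated CVD rate."""
    return np.log(var0 / var_t) / cvd_rate
```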
Dasgupta, Purnendu K; Shelor, Charles Phillip; Kadjo, Akinde Florence; Kraiczek, Karsten G
2018-02-06
Following a brief overview of the emergence of absorbance detection in liquid chromatography, we focus on the dispersion caused by the absorbance measurement cell and its inlet. A simple experiment is proposed wherein chromatographic flow and conditions are held constant but a variable portion of the column effluent is directed into the detector. The temporal peak variance (σ²t,obs), which increases as the flow rate (F) through the detector decreases, is found to be well described as a quadratic function of 1/F. This allows extrapolation of the results to zero residence time in the detector and thence determination of the true variance of the peak prior to the detector (this includes the contribution of all preceding components). This general approach should be equally applicable to detection systems other than absorbance. We also performed experiments in which the inlet/outlet system remains the same but the path length is varied. This allows one to assess the individual contributions of the cell itself and the inlet/outlet system to the total observed peak variance. The dispersion in the cell itself has often been modeled as a flow-independent parameter, dependent only on the cell volume. Except for very long path/large volume cells, this paradigm is simply incorrect.
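The extrapolation step amounts to a quadratic fit in 1/F whose intercept is the pre-detector variance; a short sketch follows, with made-up flow rates and peak variances standing in for measured data.

```python
import numpy as np

# Fit the observed temporal peak variance as a quadratic in 1/F and
# extrapolate to 1/F -> 0 (zero detector residence time). The intercept
# estimates the variance contributed by everything upstream of the cell.
flows = np.array([0.2, 0.4, 0.6, 0.8, 1.0])      # mL/min (illustrative)
peak_vars = np.array([9.1, 4.6, 3.4, 2.9, 2.6])  # s^2   (illustrative)
c2, c1, c0 = np.polyfit(1.0 / flows, peak_vars, 2)
print(f"pre-detector variance ~ {c0:.2f} s^2")
```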
Siddall, James; Huebner, E Scott; Jiang, Xu
2013-01-01
This study examined the cross-sectional and prospective relationships between three sources of school-related social support (parent involvement, peer support for learning, and teacher-student relationships) and early adolescents' global life satisfaction. The participants were 597 middle school students from 1 large school in the southeastern United States who completed measures of school social climate and life satisfaction on 2 occasions, 5 months apart. The results revealed that school-related experiences in terms of social support for learning contributed substantial amounts of variance to individual differences in adolescents' satisfaction with their lives as a whole. Cross-sectional multiple regression analyses of the differential contributions of the sources of support demonstrated that family and peer support for learning contributed statistically significant, unique variance to global life satisfaction reports. Prospective multiple regression analyses demonstrated that only family support for learning continued to contribute statistically significant, unique variance to the global life satisfaction reports at Time 2. The results suggest that school-related experiences, especially family-school interactions, spill over into adolescents' overall evaluations of their lives at a time when direct parental involvement in schooling and adolescents' global life satisfaction are generally declining. Recommendations for future research and educational policies and practices are discussed. © 2013 American Orthopsychiatric Association.
Scintillation statistics measured in an earth-space-earth retroreflector link
NASA Technical Reports Server (NTRS)
Bufton, J. L.
1977-01-01
Scintillation was measured in a vertical path from a ground-based laser transmitter to the Geos 3 satellite and back to a ground-based receiver telescope, and the experimental results were compared with analytical results presented in a companion paper (Bufton, 1977). The normalized variance, the probability density function and the power spectral density of scintillation were all measured. Moments of the satellite scintillation data in terms of normalized variance were lower than expected. The power spectrum analysis suggests that there were scintillation components at frequencies higher than the 250 Hz bandwidth available in the experiment.
Big Five personality traits: are they really important for the subjective well-being of Indians?
Tanksale, Deepa
2015-02-01
This study empirically examined the relationship between the Big Five personality traits and subjective well-being (SWB) in India. SWB variables used were life satisfaction, positive affect and negative affect. A total of 183 participants in the age range 30-40 years from Pune, India, completed the personality and SWB measures. Backward stepwise regression analysis showed that the Big Five traits accounted for 17% of the variance in life satisfaction, 35% variance in positive affect and 28% variance in negative affect. Conscientiousness emerged as the strongest predictor of life satisfaction. In line with the earlier research findings, neuroticism and extraversion were found to predict negative affect and positive affect, respectively. Neither openness to experience nor agreeableness contributed to SWB. The research emphasises the need to revisit the association between personality and SWB across different cultures, especially non-western cultures. © 2014 International Union of Psychological Science.
Musical Experience, Auditory Perception and Reading-Related Skills in Children
Banai, Karen; Ahissar, Merav
2013-01-01
Background The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Methodology/Principal Findings Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654
Koen, Joshua D; Aly, Mariam; Wang, Wei-Chun; Yonelinas, Andrew P
2013-11-01
A prominent finding in recognition memory is that studied items are associated with more variability in memory strength than new items. Here, we test 3 competing theories for why this occurs: the encoding variability, attention failure, and recollection accounts. Distinguishing among these theories is critical because each provides a fundamentally different account of the processes underlying recognition memory. The encoding variability and attention failure accounts propose that old item variance will be unaffected by retrieval manipulations because the processes producing this effect are ascribed to encoding. The recollection account predicts that both encoding and retrieval manipulations that preferentially affect recollection will affect memory variability. These contrasting predictions were tested by examining the effect of response speeding (Experiment 1), dividing attention at retrieval (Experiment 2), context reinstatement (Experiment 3), and increased test delay (Experiment 4) on recognition performance. The results of all 4 experiments confirm the predictions of the recollection account and are inconsistent with the encoding variability account. The evidence supporting the attention failure account is mixed, with 2 of the 4 experiments confirming the account and 2 disconfirming it. These results indicate that encoding variability and attention failure are insufficient accounts of memory variance and provide support for the recollection account. Several alternative theoretical accounts of the results are also considered. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Experiences of college-age youths in families with a recessive genetic condition.
Hern, Marcia J; Beery, Theresa A; Barry, Detrice G
2006-05-01
Growing up in a family with a recessive genetic condition can trigger questions about effects on progeny. This study explored perceptions of family hardiness and information sharing about genetic risk by 18- to 21-year-olds. Semistructured interviews, the Family Hardiness Index (FHI), and a Family Information Sharing Analog Scale (FISAS) were used. Participants included 11 youths who had relatives with hemophilia and 4 with sickle cell anemia. Findings revealed seven themes: assimilating premature knowledge; caring for others, denying self; cautioning during development; experiencing continual sickness; feeling less than; magnifying transition experiences; and sustaining by faith. There was no significant correlation between total FHI and FISAS scores. However, FISAS scores differed significantly between the two genetic conditions: hardiness was higher and correlated with information sharing among college youths in families with hemophilia. Additional research can lead to nursing interventions that provide genetic information to youths in families affected by these conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, J; Slaughter, D; Norman, E
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that this background variance could be larger than assumed, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
Prediction of Cutting Force in Turning Process-an Experimental Approach
NASA Astrophysics Data System (ADS)
Thangarasu, S. K.; Shankar, S.; Thomas, A. Tony; Sridhar, G.
2018-02-01
This paper deals with the prediction of cutting forces in a turning process. The turning process with an advanced cutting tool has several advantages over grinding, such as short cycle time, process flexibility, comparable surface roughness, high material removal rate and fewer environmental problems without the use of cutting fluid. In this work, a full-bridge dynamometer was used to measure the cutting forces on a mild steel workpiece with a cemented carbide insert tool for different combinations of cutting speed, feed rate and depth of cut. The experiments were planned based on a Taguchi design, and the measured cutting forces were compared with the predicted forces in order to validate the feasibility of the proposed design. The percentage contribution of each process parameter was analyzed using analysis of variance (ANOVA). Results from both the lathe tool dynamometer and the designed full-bridge dynamometer were analyzed using the Taguchi design of experiments and ANOVA.
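The ANOVA percentage-contribution step lends itself to a compact sketch: 100 · SS_factor / SS_total, with SS_factor computed from the mean response at each factor level of the orthogonal array. The simplified version below ignores interaction and error terms; the data layout and factor names are assumptions.

```python
import numpy as np

def percent_contributions(response, factor_levels):
    """Taguchi-style percentage contribution of each factor.

    response      : array of measured responses, one per run
    factor_levels : dict mapping factor name -> array of level labels
                    (one per run, aligned with `response`)
    """
    y = np.asarray(response, dtype=float)
    ss_total = ((y - y.mean()) ** 2).sum()
    out = {}
    for name, levels in factor_levels.items():
        levels = np.asarray(levels)
        ss = sum(
            (levels == lv).sum() * (y[levels == lv].mean() - y.mean()) ** 2
            for lv in np.unique(levels)
        )
        out[name] = 100.0 * ss / ss_total
    return out
```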
Penalized weighted least-squares approach for low-dose x-ray computed tomography
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
The noise of low-dose computed tomography (CT) sinogram data follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the signal indicates a strong correlation among neighboring views. Based on these observations, the Karhunen-Loève (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses Gauss-Seidel iteration to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in low-contrast environments. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
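To make the PWLS step concrete: the objective (y − s)ᵀW(y − s) + β sᵀRs, with W the inverse noise covariance (diagonal here) and R a roughness penalty, has the closed-form minimizer s = (W + βR)⁻¹Wy. The one-dimensional sketch below uses a second-difference penalty and a diagonal W as illustrative assumptions; it omits the KL decorrelation across views.

```python
import numpy as np

def pwls_smooth(y, noise_var, beta):
    """1-D penalized weighted least-squares smoothing:
    minimize (y - s)' W (y - s) + beta * s' R s,
    with W = diag(1 / noise_var) and R = D'D for the second-difference
    operator D. Solves the normal equations (W + beta*R) s = W y."""
    n = len(y)
    W = np.diag(1.0 / noise_var)
    D = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
    R = D.T @ D
    return np.linalg.solve(W + beta * R, W @ y)
```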
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-01-01
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818
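The mean-variance problem that admits a closed form for the perfect attenuator can be illustrated with a toy Lagrangian solution. The noise model (per-ray variance proportional to 1/(fluence × transmission)) and the linear dose model are simplifying assumptions, so this is a hypothetical stand-in rather than the paper's formula.

```python
import numpy as np

def perfect_attenuator_fluence(transmission, dose_per_fluence, dose_budget):
    """Per-ray fluence minimizing mean variance under a dose cap.

    Minimizing sum_i 1/(phi_i * t_i) subject to sum_i d_i * phi_i = D
    gives phi_i proportional to 1/sqrt(t_i * d_i), rescaled to exhaust
    the dose budget (stationarity of the Lagrangian in each phi_i).
    """
    raw = 1.0 / np.sqrt(transmission * dose_per_fluence)
    scale = dose_budget / np.sum(dose_per_fluence * raw)
    return scale * raw
```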
Poellmann, Katja; Mitterer, Holger; McQueen, James M.
2014-01-01
Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, “book binder”) and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, “ready”), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 and 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations. PMID:24910622
NASA Astrophysics Data System (ADS)
Liu, WenXiang; Mou, WeiHua; Wang, FeiXue
2012-03-01
With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has developed rapidly. References indicate that triple-frequency second-order ionosphere correction is worse than dual-frequency first-order ionosphere correction because of the larger noise amplification factor. On the assumption that the variances of the three pseudoranges were equal, other references presented a triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the pseudorange ionosphere-free combination, is proposed in this paper. It is found that conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
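The minimum-variance construction can be sketched as a small equality-constrained quadratic program: minimize wᵀΣw subject to Σwᵢ = 1 (geometry preserved) and Σwᵢ/fᵢ² = 0 (first-order ionospheric delay cancelled). This is a sketch of the stated optimization, not the paper's derivation; the example frequencies and variances below are made up.

```python
import numpy as np

def min_variance_iono_free_weights(freqs, pr_vars):
    """Weights for a first-order ionosphere-free pseudorange combination
    minimizing the combined variance with unequal per-frequency
    pseudorange variances, via the KKT system of the constrained QP."""
    f = np.asarray(freqs, dtype=float)
    S = np.diag(np.asarray(pr_vars, dtype=float))
    A = np.vstack([np.ones_like(f), 1.0 / f**2])  # constraints A w = b
    b = np.array([1.0, 0.0])
    n = len(f)
    K = np.block([[2 * S, A.T], [A, np.zeros((2, 2))]])
    sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
    return sol[:n]

# Example: GPS L1/L2/L5 frequencies (Hz) with unequal variances (m^2).
w = min_variance_iono_free_weights(
    [1575.42e6, 1227.60e6, 1176.45e6], [0.09, 0.16, 0.12])
```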
NASA Astrophysics Data System (ADS)
Gamm, Ute A.; Huang, Brendan K.; Mis, Emily K.; Khokha, Mustafa K.; Choma, Michael A.
2017-04-01
Mucociliary flow is an important defense mechanism in the lung to remove inhaled pathogens and pollutants. A disruption of ciliary flow can lead to respiratory infections. Even though patients in the intensive care unit (ICU) either have or are very susceptible to respiratory infections, mucociliary flow is not well understood in the ICU setting. We recently demonstrated that hyperoxia, a consequence of administering supplemental oxygen to a patient in respiratory failure, can lead to a significant reduction of cilia-driven fluid flow in mouse trachea. There are other factors that are relevant to ICU medicine that can damage the ciliated tracheal epithelium, including inhalation injury and endotracheal tube placement. In this study we use two animal models, Xenopus embryo and ex vivo mouse trachea, to analyze flow defects in the injured ciliated epithelium. Injury is generated either mechanically with a scalpel or chemically by calcium chloride (CaCl2) shock, which efficiently but reversibly deciliates the embryo skin. In this study we used optical coherence tomography (OCT) and particle tracking velocimetry (PTV) to quantify cilia driven fluid flow over the surface of the Xenopus embryo. We additionally visualized damage to the ciliated epithelium by capturing 3D speckle variance images that highlight beating cilia. Mechanical injury disrupted cilia-driven fluid flow over the injured site, which led to a reduction in cilia-driven fluid flow over the whole surface of the embryo (n=7). The calcium chloride shock protocol proved to be highly effective in deciliating embryos (n=6). 3D speckle variance images visualized a loss of cilia and cilia-driven flow was halted immediately after application. We also applied CaCl2-shock to cultured ex vivo mouse trachea (n=8) and found, similarly to effects in Xenopus embryo, an extensive loss of cilia with resulting cessation of flow. We investigated the regeneration of the ciliated epithelium after an 8 day incubation period, and found that cilia had regrown and flow was completely restored. In conclusion, OCT is a valuable tool to visualize injury of the ciliated epithelium and to quantify reduction of generated flow. This method allows for systematic investigation of focal and diffuse injury of the ciliated epithelium and the assessment of mechanisms to compensate for loss of flow.
Hydrocortisone Cream to Reduce Perineal Pain after Vaginal Birth: A Randomized Controlled Trial.
Manfre, Margaret; Adams, Donita; Callahan, Gloria; Gould, Patricia; Lang, Susan; McCubbins, Holly; Mintz, Amy; Williams, Sommer; Bishard, Mark; Dempsey, Amy; Chulay, Marianne
2015-01-01
To determine if the use of hydrocortisone cream decreases perineal pain in the immediate postpartum period. This was a randomized controlled trial (RCT), crossover study design, with each participant serving as their own control. Participants received three different methods for perineal pain management at three sequential perineal pain treatments after birth: two topical creams (corticosteroid; placebo) and a control treatment (no cream application). Treatment order was randomly assigned, with participants and investigators blinded to cream type. The primary dependent variable was the change in perineal pain levels (posttest minus pretest pain levels) immediately before and 30 to 60 minutes after perineal pain treatments. Data were analyzed with analysis of variance, with p < 0.05 considered significant. A total of 27 participants completed all three perineal pain treatments over a 12-hour period. A reduction in pain was found after application of both the topical creams, with average perineal pain change scores of -4.8 ± 8.4 mm after treatment with hydrocortisone cream (N = 27) and -6.7 ± 13.0 mm after treatment with the placebo cream (N = 27). Changes in pain scores with no cream application were 1.2 ± 10.5 mm (N = 27). Analysis of variance found a significant difference between treatment groups (F2,89 = 3.6, p = 0.03), with both cream treatments having significantly better pain reduction than the control, no cream treatment (hydrocortisone vs. no cream, p = 0.04; placebo cream vs. no cream, p = 0.01). There were no differences in perineal pain reduction between the two cream treatments (p = .54). This RCT found that the application of either hydrocortisone cream or placebo cream provided significantly better pain relief than no cream application.
Ribeiro, Daniel Cury; de Castro, Marcelo Peduzzi; Sole, Gisela; Vicenzino, Bill
2016-04-01
Manual therapy enhances pain-free range of motion and reduces pain levels, but its effect on shoulder muscle activity is unclear. This study aimed to assess the effects of a sustained glenohumeral postero-lateral glide during elevation on shoulder muscle activity. Thirty asymptomatic individuals participated in a repeated measures study of the electromyographic activity of the supraspinatus, infraspinatus, posterior deltoid, and middle deltoid. Participants performed four sets of 10 repetitions of shoulder scaption and abduction with and without a glide of the glenohumeral joint. Repeated-measures multivariate analysis of variance (MANOVA) was used to assess the effects of movement direction (scaption and abduction), and condition (with and without glide) (within-subject factors) on activity level of each muscle (dependent variables). Significant MANOVAs were followed-up with repeated-measures one-way analysis of variance. During shoulder scaption with glide, the supraspinatus showed a reduction of 4.1% maximal isometric voluntary contraction (MVIC) (95% CI 2.4, 5.8); and infraspinatus 1.3% MVIC (95% CI 0.5, 2.1). During shoulder abduction with a glide, supraspinatus presented a reduction of 2.5% MVIC (95% CI 1.1, 4.0), infraspinatus 2.1% MVIC (95% CI 1.0, 3.2), middle deltoid 2.2% MVIC (95% CI = 0.4, 4.1), posterior deltoid 2.1% MVIC (95% CI 1.3, 2.8). In asymptomatic individuals, sustained glide reduced shoulder muscle activity compared to control conditions. This might be useful in enhancing shoulder movement in clinical populations. Reductions in muscle activity might result from altered joint mechanics, including simply helping to lift the arm, and/or through changing afferent sensory input about the shoulder. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Buckner, Steven A.
The Helicopter Emergency Medical Service (HEMS) industry has a significant role in the transportation of injured patients, but has experienced more accidents than all other segments of the aviation industry combined. With the objective of addressing this discrepancy, this study assessed the effect of safety management system (SMS) implementation and aviation technology utilization on the reduction of HEMS accident rates. Participating were 147 pilots from Federal Aviation Regulations Part 135 HEMS operators, who completed a survey questionnaire based on the Safety Culture and Safety Management System Survey (SCSMSS). The study assessed the predictive value of SMS implementation and aviation technologies for HEMS accident rates using correlation and multiple linear regression. The correlation analysis identified three significant positive relationships: HEMS years of experience had a strong significant positive relationship with accident rate (r = .90; p < .05); SMS had a moderate significant positive relationship with Night Vision Goggles (NVG) (r = .38; p < .05); and SMS had a slight significant positive relationship with Terrain Avoidance Warning System (TAWS) (r = .234; p < .05). Multiple regression analysis suggested that, when combined with NVG, TAWS, and SMS, HEMS years of experience explained 81.4% of the variance in accident rate scores (p < .05), and HEMS years of experience was a significant predictor of accident rates (p < .05). Additional quantitative regression analysis was recommended to replicate the results of this study, to consider the influence of these variables on the continued reduction of HEMS accidents, and to encourage implementation of SMS and aviation technologies from a systems engineering perspective. Recommendations for practice included the adoption of existing regulatory guidance for an SMS program. A qualitative analysis of SMS implementation and HEMS accident rates from the pilot's perspective was also recommended for future study. A quantitative longitudinal study would further explore inferential relationships between the study variables. Current strategies should include increased utilization of available aviation technology resources, as this proactive stance may benefit the establishment of an effective safety culture within the HEMS industry.
Möldner, Meike; Unglaub, Frank; Hahn, Peter; Müller, Lars P; Bruckner, Thomas; Spies, Christian K
2015-02-01
To investigate functional and subjective outcome parameters after arthroscopic debridement of central articular disc lesions (Palmer type 2C) and to correlate these findings with ulna length. Fifty patients (15 men; 35 women; mean age, 47 y) with Palmer type 2C lesions underwent arthroscopic debridement. Nine of these patients (3 men; 6 women; mean static ulnar variance, 2.4 mm; SD, 0.5 mm) later underwent ulnar shortening osteotomy because of persistent pain and had a mean follow-up of 36 months. Mean follow-up was 38 months for patients with debridement only (mean static ulnar variance, 0.5 mm; SD, 1.2 mm). Examination parameters included range of motion, grip and pinch strengths, pain (visual analog scale), and functional outcome scores (Modified Mayo Wrist score [MMWS] and Disabilities of the Arm, Shoulder, and Hand [DASH] questionnaire). Patients who had debridement only reached a DASH questionnaire score of 18 and an MMWS of 89 with significant pain reduction from 7.6 to 2.0 on the visual analog scale. Patients with additional ulnar shortening reached a DASH questionnaire score of 18 and an MMWS of 88, with significant pain reduction from 7.4 to 2.5. Neither surgical treatment compromised grip and pinch strength in comparison with the contralateral side. We identified 1.8 mm or more of positive ulnar variance as an indication for early ulnar shortening in the case of persistent ulnar-sided wrist pain after arthroscopic debridement. Arthroscopic debridement was a sufficient and reliable treatment option for the majority of patients with Palmer type 2C lesions. Because reliable predictors of the necessity for ulnar shortening are lacking, we recommend arthroscopic debridement as a first-line treatment for all triangular fibrocartilage 2C lesions, and, in the presence of persistent ulnar-sided wrist pain, ulnar shortening osteotomy after an interval of 6 months. Ulnar shortening proved to be sufficient and safe for these patients. Patients with persistent ulnar-sided wrist pain after debridement who had preoperative static positive ulnar variance of 1.8 mm or more may be treated by ulnar shortening earlier in order to spare them prolonged symptoms. Therapeutic IV. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Success Probability Analysis for Shuttle Based Microgravity Experiments
NASA Technical Reports Server (NTRS)
Liou, Ying-Hsin Andrew
1996-01-01
Presented in this report are the results of data analysis of shuttle-based microgravity flight experiments. Potential factors were identified in the previous grant period, and in this period 26 factors were selected for data analysis. In this project, the degree of success was developed and used as the performance measure. Degrees of success were assigned to 293 of the 391 experiments in the Lewis Research Center Microgravity Database. Frequency analysis and analysis of variance were conducted to determine the significance of the factors that affect experiment success.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly III, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least-squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated observation noise decrease by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
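The appeal of a direct CVE formula is that all leave-one-out errors come from a single solve instead of n re-solves. The sketch below uses the classical kriging/collocation leave-one-out identity, which is of the same flavor as the paper's formula though its exact form may differ.

```python
import numpy as np

def loo_cv_errors(C, y):
    """Leave-one-out cross-validation errors for collocation/GP
    prediction from the full observation covariance C (signal + noise):
    e_i = [C^{-1} y]_i / [C^{-1}]_{ii} (Dubrule-type identity)."""
    Cinv = np.linalg.inv(C)
    e = (Cinv @ y) / np.diag(Cinv)
    # Standardized CVEs: the LOO predictive variance is 1/[C^{-1}]_{ii},
    # so dividing e by its square root gives the blunder-test statistic.
    e_std = e * np.sqrt(np.diag(Cinv))
    return e, e_std
```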
Undergraduate Navigator Training Attrition Study
1975-11-01
stabilization. The Masculinity-Femininity Scale (SVIB), significant at the .05 level, contributed 1.73% to the predicted variance. High scores (those… Do you have extensive experience operating machinery? For example, farm equipment, construction equipment. Do you have extensive experience in athletic competition? If so, what sport(s) and what kind of…
De Boer, Dolf; Delnoij, Diana; Rademakers, Jany
2010-01-01
Background Patient‐given global ratings are frequently interpreted as summary measures of the patient perspective, with limited understanding of what these ratings summarize. Global ratings may be determined by patient experiences on priority aspects of care. Objectives (i) identify patient priorities regarding elements of care for breast cancer, hip‐ or knee surgery, cataract surgery, rheumatoid arthritis and diabetes, (ii) establish whether experiences regarding priorities are associated with patient‐given global ratings, and (iii) determine whether patient experiences regarding priorities are better predictors of global ratings than experiences concerning less important aspects of care. Setting and participants Data collected for the development of five consumer quality index surveys – disease‐specific questionnaires that capture patient experiences and priorities – were used. Results Priorities varied: breast cancer patients, for example, prioritized rapid access to care and diagnostics, while diabetics favoured dignity and appropriate frequency of tests. Experiences regarding priorities were inconsistently related to global ratings of care. Regression analyses indicated that demographics explain 2.4–8.4% of the variance in global rating. Introducing patient experiences regarding priorities increased the variance explained to 21.1–35.1%; models with less important aspects of care explained 11.8–23.2%. Conclusions Some experiences regarding priorities are strongly related to the global rating while others are poorly related. Global ratings are marginally dependent on demographics, and experiences regarding priorities are somewhat better predictors of global rating than experiences regarding less important elements. As it remains to be fully determined what global ratings summarize, caution is warranted when using these ratings as summary measures. PMID:20550597
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uresk, D.W.; Gilbert, R.O.; Rickard, W.H.
Big sagebrush (Artemisia tridentata) was subjected to a double sampling procedure to obtain reliable phytomass estimates for leaves, flowering stalks, live wood, dead wood, various combinations of the preceding, and total phytomass. Coefficients of determination (R²) between the independent variable and the various phytomass categories ranged from 0.45 to 0.93. Total phytomass was approximately 69 ± 16 (± S.E.) g/m². Reductions in the variance of the phytomass estimates ranged from 33 percent to 80 percent using double sampling assuming optimum allocation. (auth)
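As a rough illustration of why double sampling cuts variance, here is a minimal sketch of a two-phase regression estimator (all numbers and variable names hypothetical, not the study's data): a cheap auxiliary variable is measured on many plots and the expensive phytomass on a small subsample.

    import numpy as np

    rng = np.random.default_rng(1)

    # Phase 1: cheap auxiliary measurement x (e.g., a shrub volume index) on many plots.
    n1 = 400
    x1 = rng.gamma(shape=4.0, scale=5.0, size=n1)

    # Phase 2: expensive phytomass y measured on a small subsample.
    n2 = 40
    idx = rng.choice(n1, size=n2, replace=False)
    x2 = x1[idx]
    y2 = 3.0 + 1.8 * x2 + rng.normal(0, 4.0, size=n2)   # hypothetical relation

    # Regression (double sampling) estimator of the mean phytomass:
    b = np.polyfit(x2, y2, 1)[0]
    y_ds = y2.mean() + b * (x1.mean() - x2.mean())

    # The variance reduction relative to the bare subsample mean scales with
    # the regression R^2, echoing the 33-80 percent reductions reported above.
    r2 = np.corrcoef(x2, y2)[0, 1]**2
    print(f"double-sampling mean = {y_ds:.1f}, subsample mean = {y2.mean():.1f}, R^2 = {r2:.2f}")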
Tactical Implications of Air Blast Variations from Nuclear Tests
1976-11-30
work completed under Contract ODlA 001-76-C-0284. The objective of this analysis was to assess the rationale for additional underground tests (UGT) to... applications were based, and additional applications of the methodology for a more complete assessment of the UGT rationale. This report summarizes work... corresponding to a 25 percent to 50 percent reduction in yield. The maximum improvement possible through UGT is, of course, when the variance in the weapon
NASA Astrophysics Data System (ADS)
El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.
2015-10-01
A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt-γ rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, estimating the number and energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, an MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analogue MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5.
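A toy sketch of the core TLE idea (all geometry, materials, and yield numbers are hypothetical stand-ins, not the authors' database): every proton step scores the expected spectrum per unit track length, so each history contributes to the tally instead of waiting for rare sampled emissions.

    import numpy as np

    rng = np.random.default_rng(2)
    E_BINS = np.linspace(0.0, 10.0, 51)          # prompt-gamma energy grid (MeV), hypothetical

    def spectrum_db(material, proton_energy):
        # Stand-in for the offline database: expected prompt-gamma yield per
        # cm of track and per energy bin, given material and proton energy.
        scale = {"water": 1.0, "bone": 1.4}[material]
        shape = np.exp(-E_BINS[:-1] / 3.0)
        return 1e-4 * scale * (proton_energy / 100.0) * shape

    def tle_tally(n_protons=500, step=0.1, depth=20.0):
        # Each history deposits the *expected* emission spectrum along every
        # track step (track length times database yield) instead of scoring
        # only the rare sampled emissions, as an analogue estimator would.
        tally = np.zeros(E_BINS.size - 1)
        for _ in range(n_protons):
            energy, z = 160.0, 0.0               # MeV, cm (toy beam)
            while z < depth and energy > 0.0:
                material = "bone" if 5.0 < z < 7.0 else "water"
                tally += step * spectrum_db(material, energy)
                # toy slowing-down with a little straggling
                energy -= 0.8 * step / max(energy / 160.0, 0.05) * rng.lognormal(0.0, 0.1)
                z += step
        return tally / n_protons

    print(tle_tally()[:5])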
A pediatric correlational study of stride interval dynamics, energy expenditure and activity level.
Ellis, Denine; Sejdic, Ervin; Zabjek, Karl; Chau, Tom
2014-08-01
The strength of time-dependent correlations known as stride interval (SI) dynamics has been proposed as an indicator of neurologically healthy gait. Most recently, it has been hypothesized that these dynamics may be necessary for gait efficiency, although the supporting evidence to date is scant. The current study examines over-ground SI dynamics and their relationship with the cost of walking and physical activity levels in neurologically healthy children aged nine to 15 years. Twenty participants completed a single experimental session consisting of three phases: 10 min resting, 15 min walking and 10 min recovery. The scaling exponent (α) was used to characterize SI dynamics, while net energy cost was measured using a portable metabolic cart and physical activity levels were determined from a 7-day recall questionnaire. No significant linear relationships were found between α and the net energy cost measures (r < .07; p > .25) or between α and physical activity levels (r = .01, p = .62). However, there was a marked reduction in the variance of α as activity levels increased. Over-ground stride dynamics do not appear to directly reflect energy conservation of gait in neurologically healthy youth. However, the reduction in the variance of α with increasing physical activity suggests a potential exercise-moderated convergence toward the level of stride interval persistence reported in the literature for able-bodied youth. This latter finding warrants further investigation.
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as forward-weighted CADIS (FW-CADIS).
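A schematic sketch of the CADIS/FW-CADIS bookkeeping on a toy 1D slab (the fluxes below are stand-in analytic guesses, not a transport solve, and the variable names are invented for illustration):

    import numpy as np

    # Toy 1D shield: estimate a response beyond x = L from a distributed source.
    L, nx = 10.0, 100
    x = np.linspace(0, L, nx)
    sigma = 0.5                                  # attenuation coefficient (1/cm), assumed

    q = np.exp(-x / 3.0)                         # forward source density (assumed shape)
    adjoint = np.exp(-sigma * (L - x))           # adjoint flux ~ importance to the tally

    # CADIS: bias the source by importance; birth weights preserve the fair game.
    q_biased = q * adjoint
    q_biased /= q_biased.sum()
    w_birth = (q / q.sum()) / q_biased           # statistical weight at birth

    # FW-CADIS twist for a *mesh* tally: weight the adjoint source by an inverse
    # forward-flux estimate so that all cells converge at similar rates.
    forward_est = np.exp(-sigma * x)             # cheap forward estimate (assumed)
    adjoint_src_fw = 1.0 / np.maximum(forward_est, 1e-12)

    print(w_birth[:3], adjoint_src_fw[-3:])

Particles born in important regions start with low weight and high frequency; weight windows then keep weights near these targets during transport.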
Hamonts, Kelly; Ryngaert, Annemie; Smidt, Hauke; Springael, Dirk; Dejonghe, Winnie
2014-03-01
Chlorinated aliphatic hydrocarbons (CAHs) often discharge into rivers as contaminated groundwater baseflow. As biotransformation of CAHs in the impacted river sediments might be an effective remediation strategy, we investigated the determinants of the microbial community structure of eutrophic, CAH-polluted sediments of the Zenne River. Based on PCR-DGGE analysis, a high diversity of Bacteria, sulfate-reducing bacteria, Geobacteraceae, methanogenic archaea, and CAH-respiring Dehalococcoides was found. Depth in the riverbed, organic carbon content, CAH content and texture of the sediment, pore water temperature and conductivity, and concentrations of toluene and methane significantly contributed to the variance in the microbial community structure. On a meter scale, CAH concentrations alone explained only 6% of the variance in the Dehalococcoides and sulfate-reducing communities. On a cm-scale, however, CAHs explained 14.5-35% of the variation in DGGE profiles of Geobacteraceae, methanogens, sulfate-reducing bacteria, and Bacteria, while organic carbon content explained 2-14%. Neither the presence of the CAH reductive dehalogenase genes tceA, bvcA, and vcrA, nor the community structure of the targeted groups significantly differed between riverbed locations showing either no attenuation or reductive dechlorination, indicating that the microbial community composition was not a limiting factor for biotransformation in the Zenne sediments. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
Allen, Scott L; McGuigan, Katrina; Connallon, Tim; Blows, Mark W; Chenoweth, Stephen F
2017-10-01
A proposed benefit of sexual selection is that it promotes the purging of deleterious mutations from populations. For this benefit to be realized, sexual selection, which is usually stronger on males, must purge mutations deleterious to both sexes. Here, we experimentally test the hypothesis that sexual selection on males purges deleterious mutations that affect both male and female fitness. We measured male and female fitness in two panels of spontaneous mutation-accumulation lines of the fly Drosophila serrata, each established from a common ancestor. One panel of mutation accumulation lines limited both natural and sexual selection (LS lines), whereas the other panel limited natural selection but allowed sexual selection to operate (SS lines). Although mutation accumulation caused a significant reduction in male and female fitness in both the LS and SS lines, sexual selection had no detectable effect on the extent of the fitness reduction. Similarly, despite evidence of mutational variance for fitness in males and females of both treatments, sexual selection had no significant impact on the amount of mutational genetic variance for fitness. However, sexual selection did reshape the between-sex correlation for fitness, significantly strengthening it in the SS lines. After 25 generations, the between-sex correlation for fitness was positive but considerably less than one in the LS lines, suggesting that, although most mutations had sexually concordant fitness effects, sex-limited and/or sex-biased mutations contributed substantially to the mutational variance. In the SS lines this correlation was strong and could not be distinguished from unity. Individual-based simulations that mimic the experimental setup reveal two conditions that may drive our results: (1) a modest-to-large fraction of mutations have sex-limited (or highly sex-biased) fitness effects, and (2) the average fitness effect of sex-limited mutations is larger than the average fitness effect of mutations that affect both sexes similarly. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduce probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, estimating and removing the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations that contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series with different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from filtered residuals: 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
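A compact EM sketch of pPCA that tolerates gaps (a simplified treatment, assuming synthetic data: missing entries are refreshed from the current model each iteration rather than handled exactly in the E-step, and the noise update is a crude isotropic approximation):

    import numpy as np

    def ppca_em(Y, k, n_iter=200, seed=0):
        # Minimal EM for probabilistic PCA (Tipping & Bishop style). NaN
        # entries are replaced each iteration by the current reconstruction,
        # so no prior interpolation of the series is needed.
        rng = np.random.default_rng(seed)
        Y = Y.copy()
        miss = np.isnan(Y)
        n, d = Y.shape
        Y[miss] = 0.0
        mu = Y.mean(axis=0)
        W = rng.normal(size=(d, k))
        sigma2 = 1.0
        for _ in range(n_iter):
            Yc = Y - mu
            # E-step: posterior moments of the k-dimensional latent variables.
            M = W.T @ W + sigma2 * np.eye(k)
            Minv = np.linalg.inv(M)
            Ez = Yc @ W @ Minv                       # (n, k)
            Ezz = n * sigma2 * Minv + Ez.T @ Ez      # sum_i E[z_i z_i^T]
            # M-step.
            W = Yc.T @ Ez @ np.linalg.inv(Ezz)
            sigma2 = np.mean((Yc - Ez @ W.T)**2)     # crude isotropic update
            # Refresh gaps from the model: the model itself supplies them.
            recon = mu + Ez @ W.T
            Y[miss] = recon[miss]
            mu = Y.mean(axis=0)
        return W, sigma2, Ez

    # toy: one shared signal across 8 "stations", 20% gaps
    rng = np.random.default_rng(1)
    truth = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 8))
    Y = truth + 0.1 * rng.normal(size=(200, 8))
    Y[rng.random(Y.shape) < 0.2] = np.nan
    W, s2, Ez = ppca_em(Y, k=1)
    cme = Ez @ W.T                                   # common-mode signal estimate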
Seizing an opportunity: increasing use of cessation services following a tobacco tax increase.
Keller, Paula A; Greenseid, Lija O; Christenson, Matthew; Boyle, Raymond G; Schillo, Barbara A
2015-04-10
Tobacco tax increases are associated with increases in quitline calls and reductions in smoking prevalence. In 2013, ClearWay Minnesota(SM) conducted a six-week media campaign promoting QUITPLAN® Services (QUITPLAN Helpline and quitplan.com) to leverage the state's tax increase. The purpose of this study was to ascertain the association of the tax increase and media campaign with call volumes, web visits, and enrollments in QUITPLAN Services. In this observational study, call volume, web visits, enrollments, and participant characteristics were analyzed for the periods June-August 2012 and June-August 2013. Enrollment data and information about media campaigns were analyzed using multivariate regression analysis to determine the association of the tax increase with QUITPLAN Services enrollments while controlling for media. There was a 160% increase in total combined calls and web visits, and an 81% increase in enrollments in QUITPLAN Services. Helpline call volumes and enrollments declined back to prior-year levels approximately six weeks after the tax increase. Visits to and enrollments in quitplan.com also declined, but increased again in mid-August. The tax increase and media explained over 70% of the variation in enrollments in the QUITPLAN Helpline, with media explaining 34% of the variance and the tax increase explaining an additional 36.1%. However, media explained 64% of the variance in quitplan.com enrollments, and the tax increase explained an additional 7.6%. Since tax increases occur infrequently, these policy changes must be fully leveraged as quickly as possible to help reduce prevalence.
Tarazi, R; Sebbenn, A M; Kageyama, P Y; Vencovsky, R
2013-01-01
Edge effects may affect the mating system of tropical tree species and reduce the genetic diversity and variance effective size of collected seeds at the boundaries of forest fragments because of a reduction in the density of reproductive trees, neighbour size and changes in the behaviour of pollinators. Here, edge effects on the genetic diversity, mating system and pollen pool of the insect-pollinated Neotropical tree Copaifera langsdorffii were investigated using eight microsatellite loci. Open-pollinated seeds were collected from 17 seed trees within continuous savannah woodland (SW) and were compared with seeds from 11 seed trees at the edge of the savannah remnant. Seeds collected from the SW had significantly higher heterozygosity levels (Ho=0.780; He=0.831) than seeds from the edge (Ho=0.702; He=0.800). The multilocus outcrossing rate was significantly higher in the SW (tm=0.859) than in the edge (tm=0.759). Pollen pool differentiation was significant, however, it did not differ between the SW (=0.105) and the edge (=0.135). The variance effective size within the progenies was significantly higher in the SW (Ne=2.65) than at the edge (Ne=2.30). The number of seed trees to retain the reference variance effective size of 500 was 189 at the SW and 217 at the edge. Therefore, it is preferable that seed harvesting for conservation and environmental restoration strategies be conducted in the SW, where genetic diversity and variance effective size within progenies are higher. PMID:23486081
Litzow, Michael A.; Piatt, John F.
2003-01-01
We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of measuring parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
Vilos, George A; Vilos, Angelos G; Abu-Rafea, Basim; Pron, Gaylene; Kozak, Roman; Garvin, Greg
2006-05-01
To determine whether goserelin given immediately after uterine artery embolization (UAE) affected myoma reduction. Randomized pilot study (level 1). Teaching hospital. Twenty-six women. All patients underwent UAE, and 12 patients then received 10.8 mg of goserelin 24 hours later. The treatment group was 5 years older on average: 43 versus 37.7 years. Uterine and myoma volumes were measured by ultrasound 2 weeks before UAE and at 3, 6, and 12 months. Uterine and fibroid volumes. Pretreatment uterine volume was 477 versus 556 cm³, and dominant fibroid volume was 257 versus 225 cm³ in the control versus goserelin groups. Analysis of variance of the repeated measurements indicated that the change over time did not differ significantly between the two groups. By 12 months, the control group had a mean uterine volume reduction of 58%, while the goserelin group had a reduction of 45%. Dominant fibroid changes over time did not differ between the two groups. At 12 months, the mean fibroid volume had decreased by 86% and 58% in the control and goserelin groups, respectively. The addition of goserelin therapy to UAE did not alter the reduction rate or volume of uterine myomas.
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
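For readers unfamiliar with the last of these, a minimal sketch of the minimum variance distortionless response (MVDR) criterion at a single frequency bin (toy steering vector and noise statistics, not the evaluated binaural algorithms): minimize output noise power w^H R w subject to w^H d = 1, giving w = R^{-1} d / (d^H R^{-1} d).

    import numpy as np

    def mvdr_weights(R, d):
        # R: noise covariance across microphones; d: target steering vector.
        # Closed-form MVDR solution of the constrained minimization above.
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d.conj() @ Rinv_d)

    # toy 4-microphone example at one frequency bin
    rng = np.random.default_rng(3)
    n_mics = 4
    d = np.exp(-2j * np.pi * 0.1 * np.arange(n_mics))    # hypothetical steering vector
    noise = rng.normal(size=(n_mics, 1000)) + 1j * rng.normal(size=(n_mics, 1000))
    R = noise @ noise.conj().T / 1000 + 1e-6 * np.eye(n_mics)
    w = mvdr_weights(R, d)
    print(abs(w.conj() @ d))    # ~1: distortionless toward the target direction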
The effect of deep-tissue massage therapy on blood pressure and heart rate.
Kaye, Alan David; Kaye, Aaron J; Swinford, Jan; Baluch, Amir; Bawcom, Brad A; Lambert, Thomas J; Hoover, Jason M
2008-03-01
In the present study, we describe the effects of deep tissue massage on systolic, diastolic, and mean arterial blood pressure. The study involved 263 volunteers (12% males and 88% females), with an average age of 48.5 years. Overall muscle spasm/muscle strain was described as either moderate or severe for each patient. Baseline blood pressure and heart rate were measured via an automatic blood pressure cuff. Twenty-one (21) different soothing CDs were played in the background as the deep tissue massage was performed over the course of the study. The massages were between 45 and 60 minutes in duration. The data were analyzed using analysis of variance with post-hoc Scheffe's F-test. Results of the present study demonstrated an average systolic pressure reduction of 10.4 mm Hg (p<0.06), a diastolic pressure reduction of 5.3 mm Hg (p<0.04), a mean arterial pressure reduction of 7.0 mm Hg (p<0.47), and an average heart rate reduction of 10.8 beats per minute (p<0.0003). Additional scientific research in this area is warranted.
Congdon, Jayme L.; Adler, Nancy E.; Epel, Elissa S.; Laraia, Barbara A.; Bush, Nicole R.
2017-01-01
Introduction Few studies have examined prenatal mood as a means to identify women at risk for negative childbirth experiences. We explore associations between prenatal mood and birth perceptions in a socioeconomically diverse, American sample. Methods We conducted a prospective study of 136 predominantly low-income and ethnic minority women of mixed parity. Prenatal measures of perceived stress, pregnancy-related anxiety, and depressive symptoms were used to predict maternal perceptions of birth experiences one month postpartum using the Childbirth Experience Questionnaire (CEQ; 1). Results After adjusting for sociodemographic variables and mode of delivery, higher third trimester stress predicted worse CEQ total scores. This association was predominantly explained by two CEQ domains: own capacity (e.g. feelings of control and capability) and perceived safety. Pregnancy-related anxiety and depressive symptoms correlated with perceived stress, though neither independently predicted birth experience. Unplanned cesareans were associated with a worse CEQ total score. Vaginal delivery predicted greater perceived safety. Altogether, sociodemographic covariates, mode of delivery, and prenatal mood accounted for 35% of the variance in birth experience (p<.001). Discussion Our finding that prenatal stress explains a significant and likely clinically meaningful proportion of the variance in birth experience suggests that women perceive and recall their birth experiences through a lens that is partially determined by preexisting personal circumstances and emotional reserves. Since childbirth perceptions have implications for maternal and child health, patient satisfaction, and healthcare expenditures, these findings warrant consideration of prenatal stress screening to target intervention for women at risk for negative birth experiences. PMID:26948850
The spatial variability of coastal surface water temperature during upwelling [in Lake Superior]
NASA Technical Reports Server (NTRS)
Scarpace, F. L.; Green, T., III
1979-01-01
Thermal scanner imagery acquired during a field experiment designed to study an upwelling event in Lake Superior is investigated. Temperature data were measured by the thermal scanner with a spatial resolution of 7 m. These data were correlated with temperatures measured from boats. One- and two-dimensional Fourier transforms of the data were calculated and temperature variances as a function of wavenumber were plotted. A k^(-3) dependence of the temperature variance on wavenumber was found in the wavenumber range of 1-25/km. At wavenumbers greater than 25/km, a k^(-5/3) dependence was found.
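A minimal sketch of this kind of diagnostic (synthetic transect; detrending and windowing omitted): estimate the 1D wavenumber spectrum of a temperature transect and fit a log-log slope over the band quoted above.

    import numpy as np

    def spectrum_slope(temps, dx_km):
        # 1D variance spectrum of a transect and a least-squares power-law
        # slope over the 1-25 cycles/km band mentioned in the abstract.
        n = temps.size
        T = np.fft.rfft(temps - temps.mean())
        var_density = (np.abs(T)**2) / n          # un-normalized variance density
        k = np.fft.rfftfreq(n, d=dx_km)           # cycles per km
        keep = (k > 1.0) & (k < 25.0)
        slope = np.polyfit(np.log(k[keep]), np.log(var_density[keep]), 1)[0]
        return k, var_density, slope

    # synthetic transect at 7 m resolution; a random walk has a k^(-2) spectrum
    rng = np.random.default_rng(4)
    transect = rng.normal(size=4096).cumsum()
    print(f"fitted spectral slope ~ {spectrum_slope(transect, dx_km=0.007)[2]:.2f}")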
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Spectrometer (CLAES) and the Halogen Occultation Experiment (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter, owing to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode, except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.
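The χ² tuning criterion lends itself to a compact sketch (generic innovation statistics, not the Part I system): if the assumed forecast and observation error covariances are consistent with the data, each normalized innovation v^T S^{-1} v is chi-squared distributed with dim(v) degrees of freedom, so the accumulated statistic should stay near its expected value.

    import numpy as np

    def innovation_chi2(innovations, S_list):
        # innovations: list of innovation vectors v = y_obs - H x_forecast;
        # S_list: corresponding innovation covariances S = H P H^T + R.
        # Returns the accumulated chi-squared statistic and its expectation.
        chi2, dof = 0.0, 0
        for v, S in zip(innovations, S_list):
            chi2 += float(v @ np.linalg.solve(S, v))
            dof += v.size
        return chi2, dof        # tune variance parameters until chi2 ~ dof

    # toy: consistent covariances give chi2/dof near 1
    rng = np.random.default_rng(5)
    S = np.diag([2.0, 0.5])
    vs = [rng.multivariate_normal([0, 0], S) for _ in range(500)]
    chi2, dof = innovation_chi2(vs, [S] * 500)
    print(chi2 / dof)   # ~ 1.0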
ERIC Educational Resources Information Center
Nyroos, Mikaela; Korhonen, Johan; Peng, Aihui; Linnanmäki, Karin; Svens-Liavåg, Camilla; Bagger, Anette; Sjöberg, Gunnar
2015-01-01
While test anxiety has been studied extensively, little consideration has been given to the cultural impacts of children's experiences and expressions of test anxiety. The aim of this work was to examine whether variance in test anxiety scores can be predicted based on gender and cultural setting. Three hundred and ninety-eight pupils in Grade 3…
Influence of perceived and actual neighbourhood disorder on common mental illness.
Polling, C; Khondoker, M; Hatch, S L; Hotopf, M
2014-06-01
Fear of crime and perceived neighbourhood disorder have been linked to common mental illness (CMI). However, few UK studies have also considered the experience of crime at the individual and neighbourhood level. This study aims to identify individual and local-area factors associated with increased perceived neighbourhood disorder and to test associations between CMI and individuals' perceptions of disorder in their neighbourhoods, personal experiences of crime and neighbourhood crime rates. A cross-sectional survey was conducted of 1,698 adults living in 1,075 households in Lambeth and Southwark, London. CMI was assessed using the Revised Clinical Interview Schedule. Data were analysed using multilevel logistic regression with neighbourhood defined as the Lower Super Output Area. Individuals who reported neighbourhood disorder were more likely to suffer CMI (OR 2.12), as were those with individual experience of crime. These effects remained significant when individual characteristics were controlled for. While 14% of the variance in perceived neighbourhood disorder occurred at the neighbourhood level, there was no significant variance at this level for CMI. Perceived neighbourhood disorder is more common in income-deprived areas and among individuals who are unemployed. Worry about one's local area and individual experience of crime are strongly and independently associated with CMI, but neighbourhood crime rates do not appear to impact on mental health.
Bradley, Pat; Cunningham, Teresa; Lowell, Anne; Nagel, Tricia; Dunn, Sandra
2017-02-01
There is a paucity of research exploring Indigenous women's experiences in acute mental health inpatient services in Australia. Even less is known of Indigenous women's experience of seclusion events, as published data are rarely disaggregated by both indigeneity and gender. This research used secondary analysis of pre-existing datasets to identify any quantifiable difference in recorded experience between Indigenous and non-Indigenous women, and between Indigenous women and Indigenous men, in an acute mental health inpatient unit. Standard separation data on age, length of stay, legal status, and discharge diagnosis were analysed, as were seclusion register data on age, seclusion grounds, and number of seclusion events. Descriptive statistics were used to summarize the data and, where warranted, inferential analyses (analysis of variance/multivariate analysis of variance) were applied using SPSS software. The results showed that secondary analysis of existing datasets can provide a rich source of information to describe the experience of target groups, and to guide service planning and delivery of individualized, culturally-secure mental health care at a local level. The results are discussed, service and policy development implications are explored, and suggestions for further research are offered. © 2016 Australian College of Mental Health Nurses Inc.
Potential Predictability of the Monsoon Subclimate Systems
NASA Technical Reports Server (NTRS)
Yang, Song; Lau, K.-M.; Chang, Y.; Schubert, S.
1999-01-01
While the El Nino/Southern Oscillation (ENSO) phenomenon can be predicted with some success using coupled oceanic-atmospheric models, the skill of predicting the tropical monsoons is low regardless of the methods applied. The low skill of monsoon prediction may be either because the monsoons are not defined appropriately or because they are not influenced significantly by boundary forcing. The latter characterizes the importance of internal dynamics in monsoon variability and leads to many eminently chaotic features of the monsoons. In this study, we analyze results from nine AMIP-type ensemble experiments with the NASA/GEOS-2 general circulation model to assess the potential predictability of the tropical climate system. We will focus on the variability and predictability of tropical monsoon rainfall on seasonal-to-interannual time scales. It is known that the tropical climate is more predictable than its extratropical counterpart. However, predictability differs from one climate subsystem to another within the tropics, and it is important to understand the differences among these subsystems in order to increase our skill in seasonal-to-interannual prediction. We assess potential predictability by comparing the magnitudes of the internal and forced variances as defined by Harzallah and Sadourny (1995). The internal variance measures the spread among the various ensemble members. The forced part of the rainfall variance is determined by the magnitude of the ensemble mean rainfall anomaly and by the degree of consistency of the results from the various experiments.
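A minimal sketch of the forced/internal variance partition described above (synthetic ensemble; the attribution to Harzallah and Sadourny (1995) follows the abstract, and the numbers are illustrative):

    import numpy as np

    def forced_and_internal_variance(rainfall):
        # rainfall: array of shape (members, years).
        # Forced part: variance of the ensemble mean across years.
        # Internal part: average spread of members about that mean.
        ens_mean = rainfall.mean(axis=0)
        forced = ens_mean.var(ddof=1)
        internal = ((rainfall - ens_mean)**2).mean()
        return forced, internal

    rng = np.random.default_rng(6)
    years, members = 20, 9
    boundary_signal = rng.normal(0, 1.0, size=years)        # e.g., SST/ENSO forcing
    noise = rng.normal(0, 0.5, size=(members, years))       # internal dynamics
    forced, internal = forced_and_internal_variance(boundary_signal + noise)
    print(f"potential predictability ~ {forced / (forced + internal):.2f}")

A subsystem whose forced fraction is small behaves chaotically from the perspective of boundary-forced prediction, which is the sense in which monsoon rainfall can be poorly predictable even in a skillful model.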
NASA Astrophysics Data System (ADS)
Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.
2018-05-01
Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series and differences between MC and SH time series in terms of their relationship with TG time series, suggesting that the observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.
Energy reduction for the spot welding process in the automotive industry
NASA Astrophysics Data System (ADS)
Cullen, J. D.; Athi, N.; Al-Jader, M. A.; Shaw, A.; Al-Shamma'a, A. I.
2007-07-01
When performing spot welding on galvanised metals, a higher welding force and current are required than for uncoated steels. This has implications for the energy usage of each spot weld, of which there are approximately 4300 in each passenger car. The paper presents an overview of electrode current selection and its variance over the lifetime of the electrode tip, and describes the proposed analysis system for selecting welding parameters for the spot welding process as the electrode tip wears.
Microprocessor realizations of range rate filters
NASA Technical Reports Server (NTRS)
1979-01-01
The performance of five digital range rate filters is evaluated. A range rate filter receives an input of range data from a radar unit and produces an output of smoothed range data and its estimated derivative, range rate. The filters are compared through simulation on an IBM 370. Two of the filter designs are implemented on a 6800 microprocessor-based system. Comparisons are made on the basis of noise variance reduction ratios and convergence times of the filters in response to simulated range signals.
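A minimal sketch of one classic filter in this family, a fixed-gain alpha-beta tracker (gains and signals are illustrative, not the report's five designs): smaller gains reduce the output noise variance at the cost of slower convergence, which is exactly the trade-off the two comparison metrics capture.

    import numpy as np

    def alpha_beta_filter(ranges, dt, alpha=0.5, beta=0.1):
        # Smooth noisy range measurements and estimate range rate with
        # constant gains alpha (position) and beta (rate).
        r_est, rdot_est = ranges[0], 0.0
        out = []
        for z in ranges[1:]:
            r_pred = r_est + dt * rdot_est          # predict
            resid = z - r_pred                      # measurement residual
            r_est = r_pred + alpha * resid          # update range
            rdot_est = rdot_est + (beta / dt) * resid   # update range rate
            out.append((r_est, rdot_est))
        return np.array(out)

    rng = np.random.default_rng(7)
    t = np.arange(0, 20, 0.1)
    truth = 1000.0 - 15.0 * t                       # constant closing rate
    meas = truth + rng.normal(0, 5.0, size=t.size)
    est = alpha_beta_filter(meas, dt=0.1)
    print(est[-1])   # ~ (701.5, -15): smoothed range and estimated range rate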
Transport Test Problems for Hybrid Methods Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations, with a preference for cases that include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Numerical Algorithm for Delta of Asian Option
Zhang, Boxiang; Yu, Yang; Wang, Weiguo
2015-01-01
We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the Asian geometric option and use this analytical form as a control to numerically calculate the Δ of the Asian arithmetic option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare the standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
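A sketch of the control-variate strategy the abstract describes, using standard pathwise delta estimators; the closed-form geometric delta below is the textbook lognormal partial expectation, assumed here for illustration rather than taken from the paper.

    import numpy as np
    from scipy.stats import norm

    def asian_delta_cv(S0=100., K=100., r=0.05, sigma=0.2, T=1.0, n=50,
                       n_paths=20000, seed=8):
        rng = np.random.default_rng(seed)
        dt = T / n
        t = dt * np.arange(1, n + 1)
        # simulate GBM paths
        dW = rng.normal(0, np.sqrt(dt), size=(n_paths, n))
        logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
        A = np.exp(logS).mean(axis=1)            # arithmetic average
        G = np.exp(logS.mean(axis=1))            # geometric average
        disc = np.exp(-r * T)
        # pathwise deltas: dA/dS0 = A/S0 and dG/dS0 = G/S0
        dA = disc * (A > K) * A / S0
        dG = disc * (G > K) * G / S0
        # closed-form E[dG]: ln G is Gaussian with mean m and variance s2
        m = np.log(S0) + (r - 0.5 * sigma**2) * t.mean()
        s2 = sigma**2 * np.minimum.outer(t, t).mean()
        d1 = (m + s2 - np.log(K)) / np.sqrt(s2)
        EdG = disc * np.exp(m + 0.5 * s2) * norm.cdf(d1) / S0
        # control-variate combination
        b = np.cov(dA, dG)[0, 1] / dG.var()
        est = dA.mean() - b * (dG.mean() - EdG)
        se_plain = dA.std(ddof=1) / np.sqrt(n_paths)
        se_cv = (dA - b * dG).std(ddof=1) / np.sqrt(n_paths)
        return est, se_plain, se_cv

    print(asian_delta_cv())   # the CV standard error is far below the plain one

Because the arithmetic and geometric averages are almost perfectly correlated path by path, the control absorbs most of the sampling noise.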
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
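One way to picture the idea: the first-order Taylor expansion built from the cheap derivatives has an exactly known mean, so it can serve as a control variate (a generic sketch with an invented toy response, not the paper's wing analysis).

    import numpy as np

    def mc_with_gradient_control(f, grad_f, mu, cov, n=10000, seed=9):
        # Estimate E[f(X)], X ~ N(mu, cov), using the linear surrogate
        # g(x) = f(mu) + grad_f(mu).(x - mu), whose mean is exactly f(mu).
        rng = np.random.default_rng(seed)
        X = rng.multivariate_normal(mu, cov, size=n)
        fx = np.apply_along_axis(f, 1, X)
        g = f(mu) + (X - mu) @ grad_f(mu)
        b = np.cov(fx, g)[0, 1] / g.var()
        est = fx.mean() - b * (g.mean() - f(mu))
        se_plain = fx.std(ddof=1) / np.sqrt(n)
        se_cv = (fx - b * g).std(ddof=1) / np.sqrt(n)
        return est, se_plain, se_cv

    # toy "structural response": a mildly nonlinear function of two parameters
    f = lambda x: np.sin(x[0]) + 0.5 * x[1]**2 + 0.1 * x[0] * x[1]
    grad = lambda x: np.array([np.cos(x[0]) + 0.1 * x[1], x[1] + 0.1 * x[0]])
    print(mc_with_gradient_control(f, grad, np.array([0.3, 0.2]), 0.01 * np.eye(2)))

The closer the response is to linear over the input distribution, the larger the variance reduction, which is consistent with the order-of-magnitude accuracy gains the abstract reports.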
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.
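In the same spirit, here is a minimal sketch of one standard variance reduction device for highly reliable components: importance sampling of Weibull failure times (parameters illustrative, and unrelated to MC-HARP's actual two techniques).

    import numpy as np
    from scipy.stats import weibull_min

    def unreliability_is(c=1.5, lam=1000.0, t_mission=10.0, n=100000, bias=20.0, seed=10):
        # Probability that a Weibull(c, scale=lam) component fails before
        # t_mission, by analog MC and by importance sampling from a
        # distribution biased toward early failures (scale lam/bias),
        # reweighted by the likelihood ratio.
        rng = np.random.default_rng(seed)
        t = weibull_min.rvs(c, scale=lam, size=n, random_state=rng)
        p_analog = (t < t_mission).mean()
        t_b = weibull_min.rvs(c, scale=lam / bias, size=n, random_state=rng)
        w = weibull_min.pdf(t_b, c, scale=lam) / weibull_min.pdf(t_b, c, scale=lam / bias)
        p_is = ((t_b < t_mission) * w).mean()
        exact = weibull_min.cdf(t_mission, c, scale=lam)
        return p_analog, p_is, exact

    print(unreliability_is())   # both estimates ~1e-3; the IS one has far lower variance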
Crow, James F
2008-12-01
Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing, so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p → 1, p(1 − p) → 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these, along with duplications, chromosome rearrangements, horizontal transmission and polyploidy, is yet to be determined. It is likely that only a case-by-case analysis will provide the answers. Despite the difficulties that complex interactions cause for evolution in Mendelian populations, such populations nevertheless evolve very well. Long-lasting species must have evolved mechanisms for coping with such problems. Since such difficulties do not arise in asexual populations, a comparison of epistatic patterns in closely related sexual and asexual species might provide some important insights.
Energy and variance budgets of a diffusive staircase with implications for heat flux scaling
NASA Astrophysics Data System (ADS)
Hieronymus, M.; Carpenter, J. R.
2016-02-01
Diffusive convection, the mode of double-diffusive convection that occurs when both temperature and salinity increase with depth, is commonplace throughout the high-latitude oceans, and diffusive staircases constitute an important heat transport process in the Arctic Ocean. Heat and buoyancy fluxes through these staircases are often estimated using flux laws deduced either from laboratory experiments or from simplified energy or variance budgets. We have performed direct numerical simulations of double-diffusive convection at a range of Rayleigh numbers and quantified the energy and variance budgets in detail. This allows us to compare the fluxes in our simulations to those derived using known flux laws and to quantify how well the simplified energy and variance budgets approximate the full budgets. The fluxes are found to agree well with earlier estimates at high Rayleigh numbers, but we find large deviations at low Rayleigh numbers. The close ties between the heat and buoyancy fluxes and the budgets of thermal variance and energy have been utilized to derive heat flux scaling laws in the field of thermal convection. The result is the so-called GL theory, which has been found to give accurate heat flux scaling laws over a very wide parameter range. Diffusive convection has many similarities to thermal convection, and an extension of the GL theory to diffusive convection is also presented and its predictions compared to the results of our numerical simulations.
Wickizer, Thomas M; Franklin, Gary; Fulton-Kehoe, Deborah; Turner, Judith A; Mootz, Robert; Smith-Weller, Terri
2004-01-01
Objective To determine what aspects of patient satisfaction are most important in explaining the variance in patients' overall treatment experience and to evaluate the relationship between treatment experience and subsequent outcomes. Data Sources and Setting Data from a population-based survey of 804 randomly selected injured workers in Washington State filing a workers' compensation claim between November 1999 and February 2000 were combined with insurance claims data indicating whether survey respondents were receiving disability compensation payments for being out of work at 6 or 12 months after claim filing. Study Design We conducted a two-step analysis. In the first step, we tested a multiple linear regression model to assess the relationship of satisfaction measures to patients' overall treatment experience. In the second step, we used logistic regression to assess the relationship of treatment experience to subsequent outcomes. Principal Findings Among injured workers who had ongoing follow-up care after their initial treatment (n=681), satisfaction with interpersonal and technical aspects of care and with care coordination was strongly and positively associated with overall treatment experience (p<0.001). As a group, the satisfaction measures explained 38 percent of the variance in treatment experience after controlling for demographics, satisfaction with medical care prior to injury, job satisfaction, type of injury, and provider type. Injured workers who reported less-favorable treatment experience were 3.54 times as likely (95 percent confidence interval, 1.20–10.95, p=.021) to be receiving time-loss compensation for inability to work due to injury 6 or 12 months after filing a claim, compared to patients whose treatment experience was more positive. PMID:15230925
Ricci-Cabello, Ignacio; Reeves, David; Bell, Brian G; Valderas, Jose M
2017-11-01
To identify patient and family practice characteristics associated with patient-reported experiences of safety problems and harm. Cross-sectional study combining data from the individual postal administration of the validated Patient Reported Experiences and Outcomes of Safety in Primary Care (PREOS-PC) questionnaire to a random sample of patients in family practices (response rate=18.4%) and practice-level data for those practices obtained from NHS Digital. We built linear multilevel multivariate regression models to model the association between patient-level (clinical and sociodemographic) and practice-level (size and case-mix, human resources, indicators of quality and safety of care, and practice safety activation) characteristics, and outcome measures. Practices distributed across five regions in the North, Centre and South of England. 1190 patients registered in 45 practices purposefully sampled (maximal variation in practice size and levels of deprivation). Self-reported safety problems, harm and overall perception of safety. Higher self-reported levels of safety problems were associated with younger age of patients (beta coefficient 0.15) and lower levels of practice safety activation (0.44). Higher self-reported levels of harm were associated with younger age (0.13) and worse self-reported health status (0.23). Lower self-reported healthcare safety was associated with lower levels of practice safety activation (0.40). The fully adjusted models explained 4.5% of the variance in experiences of safety problems, 8.6% of the variance in harm and 4.4% of the variance in perceptions of patient safety. Practices' safety activation levels and patients' age and health status are associated with patient-reported safety outcomes in English family practices. The development of interventions aimed at improving patient safety outcomes would benefit from focusing on the identified groups. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Statistical analysis of microgravity experiment performance using the degrees of success scale
NASA Technical Reports Server (NTRS)
Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.
1994-01-01
This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information including the degree of success rankings and factors for analysis was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.
Patterns and predictors of growth in divorced fathers' health status and substance use.
DeGarmo, David S; Reid, John B; Leve, Leslie D; Chamberlain, Patricia; Knutson, John F
2010-03-01
Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed.
Young, Allison; Klossner, Joanne; Docherty, Carrie L; Dodge, Thomas M; Mensch, James M
2013-01-01
Context A better understanding of why students leave an undergraduate athletic training education program (ATEP), as well as why they persist, is critical in determining the future membership of our profession. Objective To better understand how clinical experiences affect student retention in undergraduate ATEPs. Design Survey-based research using a quantitative and qualitative mixed-methods approach. Setting Three-year undergraduate ATEPs across District 4 of the National Athletic Trainers' Association. Patients or Other Participants Seventy-one persistent students and 23 students who left the ATEP prematurely. Data Collection and Analysis Data were collected using a modified version of the Athletic Training Education Program Student Retention Questionnaire. Multivariate analysis of variance was performed on the quantitative data, followed by a univariate analysis of variance on any significant findings. The qualitative data were analyzed through inductive content analysis. Results A difference was identified between the persister and dropout groups (Pillai trace = 0.42, F1,92 = 12.95, P = .01). The follow-up analysis of variance revealed that the persister and dropout groups differed on the anticipatory factors (F1,92 = 4.29, P = .04), clinical integration (F1,92 = 6.99, P = .01), and motivation (F1,92 = 43.12, P = .01) scales. Several themes emerged in the qualitative data, including networks of support, authentic experiential learning, role identity, time commitment, and major or career change. Conclusions A perceived difference exists in how athletic training students are integrated into their clinical experiences between those students who leave an ATEP and those who stay. Educators may improve retention by emphasizing authentic experiential learning opportunities rather than hours worked, by allowing students to take on more responsibility, and by facilitating networks of support within clinical education experiences. PMID:23672327
Chen, Han; Sun, Haichun
2017-08-01
The study aims to explore the effects of receiving active videogame (AVG) feedback and playing experience on individuals' moderate-to-vigorous physical activity (MVPA) and perceived enjoyment. This was a within-subject design study. The participants included 36 (n = 15 and 21 for boys and girls, respectively) fourth graders enrolled in a rural elementary school in southern Georgia. The experiment lasted 6 weeks, with each week including three sessions. The participants were assigned to either the front row (sensor feedback) or the back row (no sensor feedback) during practice, with assignment alternated across sessions. Two different dance games were played during the study, with each game implemented for 3 weeks. MVPA was measured with GT3X+ accelerometers. Physical activity (PA) enjoyment was assessed after the completion of the first two and last two sessions of each game. A repeated-measures one-way ANOVA (analysis of variance) was used to examine the effects of AVG feedback and game on MVPA. A repeated-measures one-way MANOVA (multivariate analysis of variance) was conducted for each game to examine the effects of experience and AVG feedback on enjoyment and MVPA. No effects of AVG feedback were found for MVPA or enjoyment (P > 0.05). An effect of experience on MVPA was found for Just Dance Kids 2014, with experience decreasing MVPA (P < 0.05). Students who practiced the dance AVG without receiving feedback still demonstrated positive affect and accumulated similar MVPA to when practicing while receiving feedback. Experience with certain dance games tends to decrease PA intensity.
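An analysis of this shape can be reproduced with standard tools. The following is a minimal Python sketch of a repeated-measures one-way ANOVA using statsmodels' AnovaRM; the column names and values are invented stand-ins, not the study's data:

```python
# Minimal sketch of a repeated-measures ANOVA, assuming a long-format table
# with hypothetical columns: subject id, feedback condition, and MVPA.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject":  [1, 1, 2, 2, 3, 3, 4, 4],
    "feedback": ["on", "off"] * 4,   # AVG sensor-feedback condition
    "mvpa":     [12.1, 11.8, 9.4, 9.9, 14.2, 13.6, 10.5, 10.1],
})

# One within-subject factor (feedback); AnovaRM fits the repeated-measures model.
res = AnovaRM(df, depvar="mvpa", subject="subject", within=["feedback"]).fit()
print(res)
```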
EEG-based alpha neurofeedback training for mood enhancement.
Phneah, Swee Wu; Nisar, Humaira
2017-06-01
The aim of this paper is to develop a preliminary neurofeedback system to improve the mood of subjects using audio signals by enhancing their alpha brainwaves. Assessment of the effect of music on the human subjects is performed using three methods: subjective assessment of mood with the help of a questionnaire, the effect on the brain by analysing EEG signals, and the effect on the body by physiological assessment. In this study, two experiments were designed. The first experiment was to determine the short-term effect of music on soothing human subjects, whereas the second experiment was to determine its long-term effect. Two types of music were used in the first experiment: the favourite music selected by the participants and a relaxing music with alpha-wave binaural beats. The findings showed that the relaxing music had a better soothing effect on the participants, both psychologically and physiologically. However, the one-way analysis of variance (ANOVA) results showed that the short-term soothing effect of both favourite music and relaxing music was not significant in changing the mean alpha absolute power and mean physiological measures (blood pressure and heart rate) at the significance level of 0.05. The second experiment resembled alpha neurofeedback training, whereby the participants trained their brains to produce more alpha brainwaves by listening to the relaxing music with alpha-wave binaural beats for 30 min daily. The results showed that the relaxing music had a long-term psychological and physiological soothing effect on the participants, as observed from the increase in alpha power and decrease in physiological measures after each session of training. The training was found to be effective in increasing the alpha power significantly [F(2,12) = 11.5458, p = 0.0016], but no significant reduction in physiological measures was observed at the significance level of 0.05.
Subha, Bakthavachallam; Song, Young Chae; Woo, Jung Hui
2015-09-15
The present study aims to optimize the slow-release biostimulant ball (BSB) for bioremediation of contaminated coastal sediment using response surface methodology (RSM). Different bacterial communities were evaluated using a pyrosequencing-based approach in contaminated coastal sediments. The effects of BSB size (1-5 cm), distance (1-10 cm) and time (1-4 months) on changes in chemical oxygen demand (COD) and volatile solid (VS) reduction were determined. Maximum reductions of COD and VS, 89.7% and 78.8%, respectively, were observed at a 3 cm ball size, 5.5 cm distance and 4 months; these values are the optimum conditions for effective treatment of contaminated coastal sediment. Most of the variance in COD and VS (0.9291 and 0.9369, respectively) was explained by our chosen models. BSB is a promising method for COD and VS reduction and enhancement of SRB diversity. Copyright © 2015 Elsevier Ltd. All rights reserved.
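As a rough illustration of the RSM step, the sketch below fits a full quadratic response surface by ordinary least squares on synthetic data shaped like the study's three factors (ball size, distance, time); the response function and noise level are assumptions, not the paper's data:

```python
# Illustrative response-surface fit (not the study's data): a full quadratic
# model in ball size, distance, and time, solved by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 30
size = rng.uniform(1, 5, n)    # ball size, cm
dist = rng.uniform(1, 10, n)   # distance, cm
time = rng.uniform(1, 4, n)    # months
# Hypothetical COD-reduction response with an interior optimum plus noise.
y = 90 - 2*(size - 3)**2 - 0.5*(dist - 5.5)**2 - 3*(time - 4)**2 + rng.normal(0, 1, n)

# Design matrix: intercept, linear, interaction, and pure quadratic terms.
X = np.column_stack([
    np.ones(n), size, dist, time,
    size*dist, size*time, dist*time,
    size**2, dist**2, time**2,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta)**2) / np.sum((y - y.mean())**2)
print(f"R^2 of the quadratic surface: {r2:.4f}")  # analogous to the ~0.93 reported
```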
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, D.L.; Arbaugh, M.J.; Wakefield, V.A.
1987-08-01
Evidence is presented for a reduction in radial growth of Jeffrey pine in the mixed conifer forest of Sequoia and Kings Canyon National Parks, California. Mean annual radial increment of trees with symptoms of ozone injury was 11% less than that of trees at sites without ozone injury. Larger diameter trees (>40 cm) and older trees (>100 yr) had greater decreases in growth than smaller and younger trees. Differences in radial growth patterns of injured and uninjured trees were prominent after 1965. Winter precipitation accounted for a large proportion of the variance in growth of all trees, although ozone-stressed trees were more sensitive to interannual variation in precipitation and temperature during recent years. These results corroborate surveys of visible ozone injury to foliage and are the first evidence of forest growth reduction associated with ozone injury in North America outside the Los Angeles basin.
Psychometric Properties of the Dietary Salt Reduction Self-Care Behavior Scale.
Srikan, Pratsani; Phillips, Kenneth D
2014-07-01
Valid, reliable, and culturally specific scales to measure salt-reduction self-care behavior in older adults are needed. The purpose of this study was to develop the Dietary Salt Reduction Self-Care Behavior Scale (DSR-SCB) for use in hypertensive older adults, with Orem's self-care deficit theory as a base. Exploratory factor analysis, Rasch modeling, and reliability analysis were performed on data from 242 older Thai adults. Nine items loaded on one factor (factor loadings = 0.63 to 0.79) and accounted for 52.28% of the variance (eigenvalue = 4.71). The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.89, and Bartlett's test was significant (χ²(df = 36) = 916.48, p < 0.0001). Infit and outfit mean squares ranged from 0.81 to 1.25, while infit and outfit standardized mean squares were located within ±2. Cronbach's alpha was 0.88. The 9-item DSR-SCB is a short and reliable scale. © The Author(s) 2014.
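For readers wanting to reproduce the reliability figure, the following Python sketch computes Cronbach's alpha from a respondents-by-items matrix; the synthetic data merely mirror the 9-item, 242-respondent shape of the abstract:

```python
# Minimal sketch of Cronbach's alpha for a k-item scale; synthetic data only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(242, 1))                       # shared trait
items = true_score + rng.normal(scale=0.7, size=(242, 9))    # 9 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")                # high, as for the DSR-SCB
```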
Estimating stochastic noise using in situ measurements from a linear wavefront slope sensor.
Bharmal, Nazim Ali; Reeves, Andrew P
2016-01-15
It is shown how the solenoidal component of noise from the measurements of a wavefront slope sensor can be utilized to estimate the total noise: specifically, the ensemble noise variance. It is well known that solenoidal noise is orthogonal to the reconstruction of the wavefront under conditions of low scintillation (absence of wavefront vortices). Therefore, it can be retrieved even with a nonzero slope signal present. By explicitly estimating the solenoidal noise from an ensemble of slopes, it can be retrieved for any wavefront sensor configuration. Furthermore, the ensemble variance is demonstrated to be related to the total noise variance via a straightforward relationship. This relationship is revealed by the explicit estimation method: it consists of a small, heuristic set of four constants that do not depend on the underlying statistics of the incoming wavefront. These constants appear to apply to all situations considered, from laboratory-experiment data to many configurations of numerical simulation, so the method is concluded to be generic.
OCT Amplitude and Speckle Statistics of Discrete Random Media.
Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J
2017-11-01
Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.
Pieterse, Alex L; Carter, Robert T; Evans, Sarah A; Walter, Rebecca A
2010-07-01
In this study, we examined the association among perceptions of racial and/or ethnic discrimination, racial climate, and trauma-related symptoms among 289 racially diverse college undergraduates. Study measures included the Perceived Stress Scale, the Perceived Ethnic Discrimination Questionnaire, the Posttraumatic Stress Disorder Checklist-Civilian Version, and the Racial Climate Scale. Results of a multivariate analysis of variance (MANOVA) indicated that Asian and Black students reported more frequent experiences of discrimination than did White students. Additionally, the MANOVA indicated that Black students perceived the campus racial climate as being more negative than did White and Asian students. A hierarchical regression analysis showed that when controlling for generic life stress, perceptions of discrimination contributed an additional 10% of variance in trauma-related symptoms for Black students, and racial climate contributed an additional 7% of variance in trauma symptoms for Asian students. (c) 2010 APA, all rights reserved.
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
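A simplified sketch of the segmentation idea follows, assuming segment boundaries at mu - sigma, mu, and mu + sigma of the luminance; the authors' exact bin-modification and normalization steps are not reproduced, only the split-equalize-blend structure:

```python
# Simplified sketch of the MVSIHE idea (not the authors' exact algorithm):
# split the intensity range at mu - sigma, mu, mu + sigma, equalize each of
# the four segments within its own sub-range, then blend with the input.
import numpy as np

def mvsihe_like(img: np.ndarray, blend: float = 0.5) -> np.ndarray:
    img = img.astype(np.float64)
    mu, sigma = img.mean(), img.std()
    edges = [img.min(), max(img.min(), mu - sigma), mu,
             min(img.max(), mu + sigma), img.max() + 1e-9]
    out = np.zeros_like(img)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask]
        # Per-segment histogram equalization, mapped back into [lo, hi).
        ranks = np.argsort(np.argsort(vals))
        cdf = (ranks + 1) / vals.size
        out[mask] = lo + cdf * (hi - lo)
    # Final integration step: combine processed and original images.
    return blend * out + (1 - blend) * img

rng = np.random.default_rng(2)
enhanced = mvsihe_like(rng.integers(0, 256, (64, 64)).astype(float))
```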
Maynard, Brandy R.; Beaver, Kevin M.; Vaughn, Michael G.; DeLisi, Matthew; Roberts, Gregory
2014-01-01
School disengagement is associated with poor academic achievement, dropout, and risk behaviors such as truancy, delinquency, and substance use. Despite empirical research identifying risk correlates of school disengagement across the ecology, it is unclear from which domain these correlates arise. To redress this issue, the current study used intraclass correlation and DeFries-Fulker analyses to longitudinally decompose variance in three domains of engagement (academic, behavioral, and emotional) using data from the National Longitudinal Study of Adolescent Health. Findings suggest that nonshared environmental factors (that is, environmental contexts and experiences that are unique to each sibling) account for approximately half of the variance in indicators of school disengagement when controlling for genetic influences, and that this variance increases as adolescents grow older and rely less on their immediate family. The present study contributes new evidence on the biosocial underpinnings of school engagement and highlights the importance of interventions targeting factors in the nonshared environment. PMID:25525321
Atmospheric turbulence effects measured along horizontal-path optical retro-reflector links.
Mahon, Rita; Moore, Christopher I; Ferraro, Mike; Rabinovich, William S; Suite, Michele R
2012-09-01
The scintillation measured over close-to-ground retro-reflector links can be substantially enhanced due to the correlations experienced by both the direct and reflected echo beams. Experiments were carried out at China Lake, California, over a variety of ranges. The emphasis in this paper is on presenting the data from the 1.1 km retro-reflecting link that was operated for four consecutive days. The dependence of the measured irradiance flux variance on the solar fluence and on the temperature gradient above the ground is presented. The data are consistent with scintillation minima near sunrise and sunset, rising rapidly during the day and saturating at irradiance flux variances of ~10. Measured irradiance probability distributions of the retro-reflected beam are compared with standard probability density functions. The ratio of the irradiance flux variances on the retro-reflected to the direct, single-pass case is investigated with two data sets, one from a monostatic system and the other using an off-axis receiver system.
The efficacy of planetarium experiences to teach specific science concepts
NASA Astrophysics Data System (ADS)
Palmer, Joel C.
The purpose of this study was to examine the impact of planetarium experiences on students' understanding of phases of the moon and eclipses. This research employed a quasi-experimental design. Students from 12 classes in four different schools, all in the same school district, participated in the study. A total of 178 students from four teachers participated in the study. Data were collected using a researcher-developed pretest and posttest. All students received classroom instruction based on the school district's curriculum. The experimental groups took the posttest after attending a 45-minute planetarium experience titled Moon Witch. The control groups took the posttest before attending the planetarium experience but after receiving an additional 45-minute lesson on phases of the moon and eclipses. The data were analyzed using the Statistical Package for Social Sciences (SPSS). An Analysis of Variance (ANOVA) was run to determine whether there was variance among teachers' instructional practices. Since the results indicated there was no significant variance among teachers, the study sample was analyzed as a single group. An Independent Samples t Test for Means was run in SPSS for the study sample and each subgroup. Subgroups were African American, Hispanic, White, Male, Female, and Economically Disadvantaged. The results indicated that there was an improvement in mean gain scores for the experimental group over the control group for all students and each subgroup. The differences in mean gain scores were significantly higher for all students and for the African American, Female, and Economically Disadvantaged subgroups. An Independent Samples t Test for Means was run using SPSS for each of the three different sections of the pretest and posttest. The results indicated that most of the improvement was in Section 3. This section required students to manipulate photos of the phases of the moon into the correct order, and it required more spatial reasoning than Section 1 (multiple choice) or Section 2 (essay). Results indicate that planetarium experiences improved students' understanding of phases of the moon and eclipses. There was evidence that this improvement was facilitated by the planetarium's ability to create visual representations that students would otherwise have to create mentally.
Estimating the Variance of Design Parameters
ERIC Educational Resources Information Center
Hedberg, E. C.; Hedges, L. V.; Kuyper, A. M.
2015-01-01
Randomized experiments are generally considered to provide the strongest basis for causal inference. Consequently, randomized field trials have been increasingly used to evaluate the effects of education interventions, products, and services. Populations of interest in education are often hierarchically structured (such as…
Upper ankle joint space detection on low contrast intraoperative fluoroscopic C-arm projections
NASA Astrophysics Data System (ADS)
Thomas, Sarina; Schnetzke, Marc; Brehler, Michael; Swartman, Benedict; Vetter, Sven; Franke, Jochen; Grützner, Paul A.; Meinzer, Hans-Peter; Nolden, Marco
2017-03-01
Intraoperative mobile C-arm fluoroscopy is widely used for interventional verification in trauma surgery, high flexibility combined with low cost being the main advantages of the method. However, the lack of global device-to-patient orientation is challenging when comparing the acquired data to other intrapatient datasets. In upper ankle joint fracture reduction accompanied by an unstable syndesmosis, a comparison to the unfractured contralateral side is helpful for verification of the reduction result. To reduce dose and operation time, our approach aims at the comparison of single projections of the unfractured ankle with volumetric images of the reduced fracture. For precise assessment, a pre-alignment of both datasets is a crucial step. We propose a contour extraction pipeline to estimate the joint space location for a pre-alignment of fluoroscopic C-arm projections containing the upper ankle joint. A quadtree-based hierarchical variance comparison extracts potential feature points, and a Hough transform is applied to identify bone shaft lines together with the tibiotalar joint space. Using this information, we can define the coarse orientation of the projections independently of the ankle pose during acquisition, in order to align those images to the volume of the fractured ankle. The proposed method was evaluated on thirteen cadaveric datasets consisting of 100 projections each, with image planes manually adjusted by three trauma surgeons. The results show that the method can be used to detect the joint space orientation. The correlation between angle deviation and anatomical projection direction gives valuable input on the acquisition direction for future clinical experiments.
Liquid Water Cloud Properties During the Polarimeter Definition Experiment (PODEX)
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail D.; Cairns, Brian; Wasilewski, Andrzei P.; Ackerman, Andrew S.; McGill, Matthew J.; Yorks, John E.; Hlavka, Dennis L.; Platnick, Steven; Arnold, George; Van Diedenhoven, Bastiaan;
2015-01-01
We present retrievals of water cloud properties from the measurements made by the Research Scanning Polarimeter (RSP) during the Polarimeter Definition Experiment (PODEX) held between January 14 and February 6, 2013. The RSP was onboard the high-altitude NASA ER-2 aircraft based at the NASA Dryden Aircraft Operation Facility in Palmdale, California. The retrieved cloud characteristics include cloud optical thickness, effective radius and variance of the cloud droplet size distribution derived using a parameter-fitting technique, as well as the complete droplet size distribution function obtained by means of the Rainbow Fourier Transform. Multi-modal size distributions are decomposed into several modes, and the respective effective radii and variances are computed. The methodology used to produce the retrieval dataset is illustrated with examples of a marine stratocumulus deck off the California coast and stratus/fog over California's Central Valley. In the latter case the observed bimodal droplet size distributions were attributed to a two-layer cloud structure. All retrieval data are available online from the NASA GISS website.
Improving Signal Detection using Allan and Theo Variances
NASA Astrophysics Data System (ADS)
Hardy, Andrew; Broering, Mark; Korsch, Wolfgang
2017-09-01
Precision measurements often deal with small signals buried within electronic noise. Extracting these signals can be enhanced through digital signal processing, and improving these techniques increases signal-to-noise ratios. Studies presently performed at the University of Kentucky are utilizing the electro-optic Kerr effect to understand cell charging effects within ultra-cold neutron storage cells. This work is relevant for the neutron electric dipole moment (nEDM) experiment at Oak Ridge National Laboratory. These investigations, and future investigations in general, will benefit from the improved analysis techniques illustrated here. This project showcases various methods for determining the optimum duration over which data should be gathered. Typically, extending the measuring time of an experimental run reduces the averaged noise. However, experiments also encounter drift due to fluctuations, which mitigates the benefits of extended data gathering. By comparing FFT averaging techniques with Allan and Theo variance measurements, quantifiable differences in signal detection will be presented. This research is supported by DOE Grants: DE-FG02-99ER411001, DE-AC05-00OR22725.
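For context, the non-overlapping Allan variance underlying such comparisons is straightforward to compute; a minimal Python sketch on a synthetic white-noise-plus-drift series (the drift magnitude is an arbitrary assumption):

```python
# Sketch of the (non-overlapping) Allan variance for a time series of
# detector readings: half the mean squared difference of block averages.
import numpy as np

def allan_variance(x: np.ndarray, m: int) -> float:
    """Allan variance at averaging length m samples."""
    n = x.size // m
    means = x[:n * m].reshape(n, m).mean(axis=1)   # block averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(3)
signal = rng.normal(size=100_000) + 1e-4 * np.arange(100_000)  # noise + drift
for m in (10, 100, 1000, 10_000):
    print(m, allan_variance(signal, m))
# The m that minimizes the Allan variance marks where drift begins to outweigh
# the benefit of longer averaging, i.e. the optimum data-gathering duration.
```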
Brown, Halley J; Andreason, Hope; Melling, Amy K; Imel, Zac E; Simon, Gregory E
2015-08-01
Retention, or its opposite, dropout, is a common metric of psychotherapy quality, but using it to assess provider performance can be problematic. Differences among providers in numbers of general dropouts, "good" dropouts (patients report positive treatment experiences and outcome), and "bad" dropouts (patients report negative treatment experiences and outcome) were evaluated. Patient records were paired with satisfaction surveys (N=3,054). Binomial mixed-effects models were used to examine differences among providers by dropout type. Thirty-four percent of treatment episodes resulted in dropout. Of these, 14% were bad dropouts and 27% were good dropouts. Providers accounted for approximately 17% of the variance in general dropout and 10% of the variance in both bad dropout and good dropout. The ranking of providers fluctuated by type of dropout. Provider assessments based on patient retention should offer a way to isolate dropout type, given that nonspecific metrics may lead to biased estimates of performance.
Fully moderated T-statistic for small sample size gene expression arrays.
Yu, Lianbo; Gulati, Parul; Fernandez, Soledad; Pennell, Michael; Kirschner, Lawrence; Jarjoura, David
2011-09-15
Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption, and extend the method by allowing the CV to vary with gene expression. Our CV varying method, which we refer to as the fully moderated t-statistic, was compared to three other methods (ordinary t, and two moderated t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV varying method had higher power than the other three methods, identified a greater number of true positives in spike-in data, fit simulated data under varying assumptions very well, and in a real data set better identified higher expressing genes that were consistent with functional pathways associated with the experiments.
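The constant-CV moderated t that this paper extends can be sketched in a few lines: each gene's pooled variance is shrunk toward a common prior. In the sketch below the prior parameters s0_sq and d0 are fixed by assumption rather than estimated from the data, and the fully moderated variant would additionally let the prior vary with expression level:

```python
# Sketch of a moderated t-statistic with variance shrinkage toward a common
# prior variance s0_sq with d0 prior degrees of freedom (both assumed here).
import numpy as np

def moderated_t(group1, group2, s0_sq=0.05, d0=4.0):
    n1, n2 = group1.shape[1], group2.shape[1]
    d = n1 + n2 - 2                                  # residual df per gene
    pooled = (group1.var(axis=1, ddof=1) * (n1 - 1) +
              group2.var(axis=1, ddof=1) * (n2 - 1)) / d
    s_tilde_sq = (d0 * s0_sq + d * pooled) / (d0 + d)  # shrunken variance
    diff = group1.mean(axis=1) - group2.mean(axis=1)
    return diff / np.sqrt(s_tilde_sq * (1 / n1 + 1 / n2))

rng = np.random.default_rng(4)
t = moderated_t(rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3)))
```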
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
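A quick Monte Carlo check of the independent-observations case is easy to set up; the sketch below scores a simple difference rule (not necessarily the optimal rule analyzed in the note) against the analytic Phi(d'/sqrt(2)) value:

```python
# Monte Carlo sanity check: simulate AB vs BA pairs under the equal-variance
# Gaussian model and score a simple difference-based decision rule.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
d_prime, n_trials = 1.5, 200_000
order_ab = rng.random(n_trials) < 0.5            # half AB, half BA trials
mu1 = np.where(order_ab, 0.0, d_prime)           # mean of first observation
mu2 = np.where(order_ab, d_prime, 0.0)           # mean of second observation
x1 = rng.normal(mu1, 1.0)
x2 = rng.normal(mu2, 1.0)
respond_ab = x2 > x1                             # "larger-mean stimulus came second"
pc = np.mean(respond_ab == order_ab)
print(f"Monte Carlo PC = {pc:.3f}, "
      f"analytic Phi(d'/sqrt(2)) = {norm.cdf(d_prime / np.sqrt(2)):.3f}")
```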
Scope of Attention, Control of Attention, and Intelligence in Children and Adults
Cowan, Nelson; Fristoe, Nathanael M.; Elliott, Emily M.; Brunner, Ryan P.; Saults, J. Scott
2006-01-01
Recent experimentation has shown that cognitive aptitude measures are predicted by tests of the scope of an individual's attention or capacity in simple working-memory tasks, and also by the ability to control attention. However, these experiments do not indicate how separate or related the scope and control of attention are. An experiment with 52 children 10 to 11 years old and 52 college students included measures of the scope and control of attention as well as verbal and nonverbal aptitude measures. The children showed little evidence of using sophisticated attentional control, but the scope of attention predicted intelligence in that group. In adults, the scope and control of attention both varied among individuals, and they accounted for considerable individual variance in intelligence. About 1/3 of that variance was shared between scope and control, the rest being unique to one or the other. Scope and control of attention appear to be related but distinct contributors to intelligence. PMID:17489300
Günther, Andreas; Baumann, Klaus; Frick, Eckhard; Jacobs, Christoph
2013-01-01
Spirituality/religiosity is recognized as a resource for coping with burdening life events and chronic illness. However, less is known about the consequences of a lack of positive spiritual feelings. Spiritual dryness in clergy has been described as spiritual lethargy, a lack of vibrant spiritual encounter with God, and an absence of spiritual resources, such as spiritual renewal practices. To operationalize experiences of “spiritual dryness” in terms of a specific spiritual crisis, we have developed the “spiritual dryness scale” (SDS). Here, we describe the validation of the instrument, which was applied among other standardized questionnaires in a sample of 425 Catholic priests who professionally care for the spiritual welfare of others. Feelings of “spiritual dryness” were experienced occasionally by up to 40%, and often or even regularly by up to 13%. These experiences can explain 44% of the variance in daily spiritual experiences, 30% in depressive symptoms, 22% in perceived stress, 20% in emotional exhaustion, 19% in work engagement, and 21% of the variance in the ascribed importance of religious activity. The SDS-5 can be used as a specific measure of spiritual crisis with good reliability and validity in further studies. PMID:23843867
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience-oriented, convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, namely searching through the best experiments and using a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, confirming the proposed method's ability both to reach optimal solutions and to remain robust.
Butera, Katie A; George, Steven Z; Borsa, Paul A; Dover, Geoffrey C
2018-03-05
Transcutaneous electrical nerve stimulation (TENS) is commonly used for reducing musculoskeletal pain to improve function. However, peripheral nerve stimulation using TENS can alter muscle motor output. Few studies examine motor outcomes following TENS in a human pain model. Therefore, this study investigated the influence of TENS sensory stimulation primarily on motor output (strength) and secondarily on pain and disability following exercise-induced delayed-onset muscle soreness (DOMS). Thirty-six participants were randomized to a TENS treatment, TENS placebo, or control group after completing a standardized DOMS protocol. Measures included shoulder strength, pain, mechanical pain sensitivity, and disability. TENS treatment and TENS placebo groups received 90 minutes of active or sham treatment 24, 48, and 72 hours post-DOMS. All participants were assessed daily. A repeated measures analysis of variance and post-hoc analysis indicated that, compared to the control group, strength remained reduced in the TENS treatment group (48 hours post-DOMS, P < 0.05) and TENS placebo group (48 hours post-DOMS, P < 0.05; 72 hours post-DOMS, P < 0.05). A mixed-linear modeling analysis was conducted to examine the strength (motor) change. Randomization group explained 5.6% of between-subject strength variance (P < 0.05). Independent of randomization group, pain explained 8.9% of within-subject strength variance and disability explained 3.3% of between-subject strength variance (both P < 0.05). While active and placebo TENS resulted in prolonged strength inhibition, the results were nonsignificant for pain. Results indicated that higher pain and higher disability were independently related to decreased strength. Regardless of the impact on pain, TENS, or even the perception of TENS, may act as a nocebo for motor output. © 2018 World Institute of Pain.
Sex-specific selection under environmental stress in seed beetles.
Martinossi-Allibert, I; Arnqvist, G; Berger, D
2017-01-01
Sexual selection can increase rates of adaptation by imposing strong selection in males, thereby allowing efficient purging of the mutation load on population fitness at a low demographic cost. Indeed, sexual selection tends to be male-biased throughout the animal kingdom, but little empirical work has explored the ecological sensitivity of this sex difference. In this study, we generated theoretical predictions of sex-specific strengths of selection, environmental sensitivities and genotype-by-environment interactions and tested them in seed beetles by manipulating either larval host plant or rearing temperature. Using fourteen isofemale lines, we measured sex-specific reductions in fitness components, genotype-by-environment interactions and the strength of selection (variance in fitness) in the juvenile and adult stage. As predicted, variance in fitness increased with stress, was consistently greater in males than females for adult reproductive success (implying strong sexual selection), but was similar in the sexes in terms of juvenile survival across all levels of stress. Although genetic variance in fitness increased in magnitude under severe stress, heritability decreased and particularly so in males. Moreover, genotype-by-environment interactions for fitness were common but specific to the type of stress, sex and life stage, suggesting that new environments may change the relative alignment and strength of selection in males and females. Our study thus exemplifies how environmental stress can influence the relative forces of natural and sexual selection, as well as concomitant changes in genetic variance in fitness, which are predicted to have consequences for rates of adaptation in sexual populations. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
Ceccanti, Stefano; Giampieri, Simona; Burgalassi, Susi
2011-01-01
The aim of the present investigation was to evaluate the microbial efficacy against highly resistant bacterial spores on different substrates using the lowest effective concentration of a commercial liquid sporicide based on peracetic acid. The validation was carried out following modified European regulatory agency procedures or test methods and USP guidelines, employing carriers of materials usually treated with the sporicidal solution and present in grade A cleanrooms, and spores of four different microorganisms: Bacillus subtilis and Clostridium sporogenes, both from the ATCC collection, and Bacillus cereus and Bacillus sphaericus as environmental isolates. A statistical evaluation of the data was made to estimate the variance for different study conditions. The experiments highlighted that a 70% dilution of the ready-to-use peracetic acid solution was effective in both clean and dirty conditions, showing at least a 2-log spore reduction after treatment. To obtain effective sporicidal action on the surfaces in cleanrooms it is sufficient to use a sporicidal solution at 70% of the ready-to-use concentration while ensuring a contact time of 10 min. In any case, the reduction of sporicide concentration ensures a high degree of disinfection and provides consumption savings. Wide-spectrum disinfectants are used in the pharmaceutical industry for the decontamination of work surfaces and equipment, but these products have some degree of toxicity for operators. This work arises from the need of pharmaceutical companies to find the lowest effective concentration of sanitizers in order to reduce toxicity to personnel. The sanitizer used in the study was a commercial liquid sporicide based on peracetic acid. When we started our work no similar studies were reported in the literature, so we took European regulatory agency and USP guidelines as a starting point, employing carriers of hard, non-porous materials usually treated with the sporicidal solution and present in sterile rooms, and spores of four different microorganisms. The experiments highlighted that it is sufficient to use a 70% sporicidal solution concentration with a contact time of 10 min to reduce the number of spores to values acceptable for medicinal production. The reduction of sporicide concentration both ensures a high degree of disinfection and provides a safer working environment and consumption savings.
Large contribution of natural aerosols to uncertainty in indirect forcing
NASA Astrophysics Data System (ADS)
Carslaw, K. S.; Lee, L. A.; Reddington, C. L.; Pringle, K. J.; Rap, A.; Forster, P. M.; Mann, G. W.; Spracklen, D. V.; Woodhouse, M. T.; Regayre, L. A.; Pierce, J. R.
2013-11-01
The effect of anthropogenic aerosols on cloud droplet concentrations and radiative properties is the source of one of the largest uncertainties in the radiative forcing of climate over the industrial period. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Here we perform a sensitivity analysis on a global model to quantify the uncertainty in cloud radiative forcing over the industrial period caused by uncertainties in aerosol emissions and processes. Our results show that 45 per cent of the variance of aerosol forcing since about 1750 arises from uncertainties in natural emissions of volcanic sulphur dioxide, marine dimethylsulphide, biogenic volatile organic carbon, biomass burning and sea spray. Only 34 per cent of the variance is associated with anthropogenic emissions. The results point to the importance of understanding pristine pre-industrial-like environments, with natural aerosols only, and suggest that improved measurements and evaluation of simulated aerosols in polluted present-day conditions will not necessarily result in commensurate reductions in the uncertainty of forcing estimates.
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images of Computed Tomography (CT) by nonlinear diffusion filtration in removing statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections as well as the original noisy projections were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or a Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
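Nonlinear anisotropic diffusion of this kind is commonly implemented Perona-Malik style; a minimal sketch follows, with illustrative kappa, step size, and iteration count, and periodic boundaries for brevity, none of which are taken from the paper:

```python
# Minimal Perona-Malik-style nonlinear diffusion sketch for a 2-D image.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Nearest-neighbour finite differences (np.roll gives periodic edges).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Edge-stopping conductance: small across strong gradients,
        # so edges are preserved while flat regions are smoothed.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

rng = np.random.default_rng(6)
noisy = rng.normal(100, 10, size=(128, 128))   # stand-in for a noisy projection
smoothed = anisotropic_diffusion(noisy)
```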
Wisaijohn, Thunthita; Pimkhaokham, Atiphan; Lapying, Phenkhae; Itthichaisri, Chumpot; Pannarunothai, Supasit; Igarashi, Isao; Kawabuchi, Koichi
2010-01-01
This study aimed to develop a new casemix classification system as an alternative method for the budget allocation of oral healthcare services (OHCS). Initially, the International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Thai Modification (ICD-10-TM) codes related to OHCS were used for developing the software “Grouper”. This model was designed to allow the translation of dental procedures into eight-digit codes. Multiple regression analysis was used to analyze the relationship between the factors used for developing the model and the resource consumption. Furthermore, the coefficient of variation, reduction in variance, and relative weight (RW) were applied to test the validity. The results demonstrated that 1,624 OHCS classifications, according to the diagnoses and the procedures performed, showed high homogeneity within groups and heterogeneity between groups. Moreover, the RW of the OHCS could be used to predict and control the production costs. In conclusion, this new OHCS casemix classification has potential use in global decision making. PMID:20936134
NASA Astrophysics Data System (ADS)
Liu, Lu; Hejazi, Mohamad; Li, Hongyi; Forman, Barton; Zhang, Xiao
2017-08-01
Previous modelling studies suggest that thermoelectric power generation is vulnerable to climate change, whereas studies based on historical data suggest the impact will be less severe. Here we explore the vulnerability of thermoelectric power generation in the United States to climate change by coupling an Earth system model with a thermoelectric power generation model, including state-level representation of environmental regulations on thermal effluents. We find that the impact of climate change is lower than in previous modelling estimates due to an inclusion of a spatially disaggregated representation of environmental regulations and provisional variances that temporarily relieve power plants from permit requirements. More specifically, our results indicate that climate change alone may reduce average generating capacity by 2-3% by the 2060s, while reductions of up to 12% are expected if environmental requirements are enforced without waivers for thermal variation. Our work highlights the significance of accounting for legal constructs and underscores the effects of provisional variances in addition to environmental requirements.
Kruppa, B; Rüden, H
1993-05-01
The question was whether a reduction of airborne particles and bacteria occurs in conventionally (turbulently) ventilated operating theatres, in comparison to Laminar-Airflow (LAF) operating theatres, at high air-exchange rates. Within the framework of energy consumption measures, the influence of air-exchange rates on airborne particle and bacteria concentrations was determined in two identical operating theatres with conventional ventilation (wall diffusor panel) at air-exchange rates of 7.5, 10, 15 and 20/h, without surgical activity. This was established by means of analysis of variance. In particular, for the comparison of the air-exchange rates of 7.5 and 15/h, statistically significant differences were found for airborne particle concentrations in supply and ambient air. Concerning airborne bacteria concentrations, no differences were found among the various air-exchange rates. The explained variance is quite high for non-viable particles (supply air: 37%, ambient air: 81%) but negligible for viable particles (bacteria), with values below 15%.
Measurement of hearing aid internal noise
Lewis, James D.; Goodman, Shawn S.; Bentler, Ruth A.
2010-01-01
Hearing aid equivalent input noise (EIN) measures assume the primary source of internal noise to be located prior to amplification and to be constant regardless of input level. EIN will underestimate internal noise in the case that noise is generated following amplification. The present study investigated the internal noise levels of six hearing aids (HAs). Concurrent with HA processing of a speech-like stimulus, with adaptive features (acoustic feedback cancellation, digital noise reduction, microphone directionality) both enabled and disabled, internal noise was quantified for various stimulus levels as the variance across repeated trials. Changes in noise level as a function of stimulus level demonstrated that (1) generation of internal noise is not isolated to the microphone, (2) noise may be dependent on input level, and (3) certain adaptive features may contribute to internal noise. Quantifying internal noise as the variance of the output measures allows noise to be measured under real-world processing conditions, accounts for all sources of noise, and is predictive of internal noise audibility. PMID:20370034
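The variance-across-trials metric is simple to implement; a synthetic sketch, with invented signal and noise levels standing in for recorded hearing aid output:

```python
# Sketch of the noise metric described above: with a fixed stimulus, the
# deterministic part of the output repeats across trials, so the variance
# across trials at each sample isolates the internally generated noise.
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_samples = 50, 8000
deterministic = np.sin(2 * np.pi * 5 * np.arange(n_samples) / 8000)  # repeatable part
trials = deterministic + rng.normal(scale=0.01, size=(n_trials, n_samples))

noise_var = trials.var(axis=0, ddof=1)        # variance across trials, per sample
noise_level_db = 10 * np.log10(noise_var.mean())
print(f"estimated internal noise level: {noise_level_db:.1f} dB re unit variance")
```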
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimensions of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern-space crowding. We concluded that 6~8 channels of the model, with a principal-component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
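The 90%-cumulative-variance criterion can be expressed compactly with scikit-learn; a sketch on synthetic sensor-array features (the shapes and data are assumptions, not the study's wine or tea measurements):

```python
# Sketch of the feature-selection step: keep enough principal components to
# reach at least 90% cumulative explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
X = rng.normal(size=(60, 16)) @ rng.normal(size=(16, 16))  # correlated features

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.90) + 1)        # smallest k reaching 90%
X_reduced = PCA(n_components=k).fit_transform(X)
print(f"{k} components retain {cum[k - 1]:.1%} of the variance")
```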
Predictors of motivation for abstinence at the end of outpatient substance abuse treatment
Laudet, Alexandre B.; Stanick, Virginia
2010-01-01
Commitment to abstinence, a motivational construct, is a strong predictor of reductions in drug and alcohol use. Level of commitment to abstinence at treatment end predicts sustained abstinence, a requirement for recovery. This study sought to identify predictors of commitment to abstinence at treatment end to guide clinical practice and to inform the conceptualization of motivational constructs. Polysubstance users (N = 250) recruited at the start of outpatient treatment were re-interviewed at the end of services. Based on the extant literature, potential predictors were during-treatment measures of substance use and related cognitions, psychological functioning, recovery supports, stress, quality of life satisfaction, and treatment experiences. In multivariate analyses, perceived harm of future drug use, abstinence self-efficacy, quality of life satisfaction, and number of network members in 12-step recovery contributed 26.6% of the variance explained in the dependent variable, for a total of 49.6% when combined with the control variables (demographics and baseline level of the outcome). Gender subgroup analyses yielded largely similar results. Clinical implications of findings for maximizing commitment to abstinence when clients leave treatment are discussed, as are future research directions. PMID:20185267
Does employee safety influence customer satisfaction? Evidence from the electric utility industry.
Willis, P Geoffrey; Brown, Karen A; Prussia, Gregory E
2012-12-01
Research on workplace safety has not examined implications for business performance outcomes such as customer satisfaction. In a U.S. electric utility company, we surveyed 821 employees in 20 work groups, and also had access to archival safety data and the results of a customer satisfaction survey (n=341). In geographically-based work units where there were more employee injuries (based on archival records), customers were less satisfied with the service they received. Safety climate, mediated by safety citizenship behaviors (SCBs), added to the predictive power of the group-level model, but these two constructs exerted their influence independently from actual injuries. In combination, two safety-related predictor paths (injuries and climate/SCB) explained 53% of the variance in customer satisfaction. Results offer preliminary evidence that workplace safety influences customer satisfaction, suggesting that there are likely spillover effects between the safety environment and the service environment. Additional research will be needed to assess the specific mechanisms that convert employee injuries into palpable results for customers. Better safety climate and reductions in employee injuries have the potential to offer payoffs in terms of what customers experience. Copyright © 2012 National Safety Council and Elsevier Ltd. All rights reserved.
Miranda, Carolina C B O; Dekker, Robert F H; Serpeloni, Juliana M; Fonseca, Eveline A I; Cólus, Ilce M S; Barbosa, Aneli M
2008-03-01
Biopolymers such as exopolysaccharides (EPS) are produced by microbial species and possess unusual properties known to modify biological responses, among them antimutagenicity and immunomodulation. Botryosphaeran, a newly described fungal (1→3; 1→6)-β-D-glucan produced by Botryosphaeria rhodina MAMB-05, was administered by gavage to mice at three doses (7.5, 15 and 30 mg/kg b.w. per day) over 15 days, and found to be non-genotoxic by the micronucleus test in peripheral blood and bone marrow. Botryosphaeran administered at doses of 15 and 30 mg EPS/kg b.w. decreased significantly (p < 0.001) the clastogenic effect of cyclophosphamide-induced micronucleus formation, resulting in a reduction of the frequency of micronucleated cells of 78 and 82% in polychromatic erythrocytes of bone marrow and reticulocytes in peripheral blood, respectively. The protective effect was dose-dependent, and strong anticlastogenic activity was exerted at low EPS doses. Variance analysis (ANOVA) showed no significant differences (p < 0.05) among the median body weights of the groups of mice treated with botryosphaeran during experiments evaluating the genotoxic and protective activities of botryosphaeran. This is the first report on this biological activity attributed to botryosphaeran.
Musical Agency during Physical Exercise Decreases Pain
Fritz, Thomas H.; Bowling, Daniel L.; Contier, Oliver; Grant, Joshua; Schneider, Lydia; Lederer, Annette; Höer, Felicia; Busch, Eric; Villringer, Arno
2018-01-01
Objectives: When physical exercise is systematically coupled to music production, exercisers experience improvements in mood, reductions in perceived effort, and enhanced muscular efficiency. The physiology underlying these positive effects remains unknown. Here we approached the investigation of how such musical agency may stimulate the release of endogenous opioids indirectly with a pain threshold paradigm. Design: In a cross-over design we tested the opioid-hypothesis with an indirect measure, comparing the pain tolerance of 22 participants following exercise with or without musical agency. Method: Physical exercise was coupled to music by integrating weight-training machines with sensors that control music-synthesis in real time. Pain tolerance was measured as withdrawal time in a cold pressor test. Results: On average, participants tolerated cold pain for ~5 s longer following exercise sessions with musical agency. Musical agency explained 25% of the variance in cold pressor test withdrawal times after factoring out individual differences in general pain sensitivity. Conclusions: This result demonstrates a substantial pain reducing effect of musical agency in combination with physical exercise, probably due to stimulation of endogenous opioid mechanisms. This has implications for exercise endurance, both in sports and a multitude of rehabilitative therapies in which physical exercise is effective but painful. PMID:29387030
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Superresolution is a sophisticated strategy to enhance the image quality of both low and high resolution video, performing tasks like artifact reduction, scaling and sharpness enhancement in one algorithm, all of them reconstructing high-frequency components (above the Nyquist frequency) in some way. Especially recursive superresolution algorithms can fulfill high quality demands because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the utilization of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution. Therefore, robust global motion models are mainly used, but this also limits the applicability of superresolution algorithms. Thus, handling sequences with complex object motion is essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and further compare its performance to state-of-the-art methods like trainable filters.
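In the spirit of the recursive approach described, a per-pixel temporal Kalman filter can be sketched as follows; here the motion-adaptive part is reduced to scaling the measurement variance by a per-pixel motion confidence, which is a placeholder for the paper's estimation and segmentation machinery:

```python
# Per-pixel temporal Kalman filter sketch; motion_conf is a hypothetical
# per-pixel motion confidence in (0, 1] standing in for real motion estimation.
import numpy as np

def kalman_temporal_step(estimate, p_var, frame, motion_conf,
                         q_var=1e-4, r_base=1e-2):
    """One recursive update per pixel; all arrays share the frame's shape."""
    p_pred = p_var + q_var                        # predict: add process noise
    r = r_base / np.clip(motion_conf, 1e-3, 1.0)  # unreliable motion => larger R
    gain = p_pred / (p_pred + r)                  # Kalman gain per pixel
    estimate = estimate + gain * (frame - estimate)
    p_var = (1.0 - gain) * p_pred
    return estimate, p_var

rng = np.random.default_rng(9)
est = np.zeros((64, 64))
p = np.ones((64, 64))
for _ in range(10):                               # feed a stream of noisy frames
    frame = 0.5 + rng.normal(scale=0.1, size=(64, 64))
    conf = rng.uniform(0.2, 1.0, size=(64, 64))   # hypothetical motion confidence
    est, p = kalman_temporal_step(est, p, frame, conf)
```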
Sylvatic plague reduces genetic variability in black-tailed prairie dogs.
Trudeau, Kristie M; Britten, Hugh B; Restani, Marco
2004-04-01
Small, isolated populations are vulnerable to loss of genetic diversity through inbreeding and genetic drift. Sylvatic plague, due to infection by the bacterium Yersinia pestis, caused an epizootic in the early 1990s resulting in declines and extirpations of many black-tailed prairie dog (Cynomys ludovicianus) colonies in north-central Montana, USA. Plague-induced population bottlenecks may contribute to significant reductions in genetic variability. In contrast, gene flow maintains genetic variability within colonies. We investigated the impacts of the plague epizootic and of distance to the nearest colony on levels of genetic variability in six prairie dog colonies sampled between June 1999 and July 2001, using 24 variable randomly amplified polymorphic DNA (RAPD) markers. The number of effective alleles per locus (n_e) and gene diversity (h) were significantly decreased in the three colonies affected by plague, which were recovering from the resulting bottlenecks, compared with the three colonies that did not experience plague. Genetic variability was not significantly affected by geographic distance between colonies. The majority of variance in gene frequencies was found within prairie dog colonies. Conservation of genetic variability in black-tailed prairie dogs will require the preservation of both large and small colony complexes and the gene flow among them.
The Comstar D/3 gain degradation experiment
NASA Technical Reports Server (NTRS)
Lee, T. C.; Hodge, D. B.
1981-01-01
The results of gain degradation measurements using the Comstar D/3 19.04 GHz beacon are reported. This experiment utilized 0.6 and 5 m aperture antennas aligned along the same propagation path to examine propagation effects which are related to the antenna aperture size. Sample data for clear air, scintillation in clear air, and precipitation fading are presented. Distributions of the received signal levels and variances for both antennas are also presented.
LaMothe, Jeremy; Baxter, Josh R; Gilbert, Susannah; Murphy, Conor I; Karnovsky, Sydney C; Drakos, Mark C
2017-06-01
Syndesmotic injuries can be associated with poor patient outcomes and posttraumatic ankle arthritis, particularly in the case of malreduction. However, ankle joint contact mechanics following a syndesmotic injury and reduction remain poorly understood. The purpose of this study was to characterize the effects of a syndesmotic injury and of reduction techniques on ankle joint contact mechanics in a biomechanical model. Ten cadaveric whole lower leg specimens with undisturbed proximal tibiofibular joints were prepared and tested. Contact area, contact force, and peak contact pressure were measured in the ankle joint during simulated standing in the intact, injured, and 3 reduction conditions: screw fixation with a clamp, screw fixation without a clamp (thumb technique), and a suture-button construct. Differences in these ankle contact parameters between conditions were detected using repeated-measures analysis of variance. Syndesmotic disruption decreased tibial plafond contact area and force. Syndesmotic reduction did not restore ankle loading mechanics to the values measured in the intact condition. Reduction with the thumb technique restored significantly more joint contact area and force than the reduction clamp or the suture-button construct, but even so, no reduction technique restored contact mechanics to intact levels. Decreased contact area and force with disruption imply that other structures (e.g., the medial and lateral gutters) are likely carrying more load, which may have clinical implications such as the development of posttraumatic arthritis.
Causes of Long-Term Drought in the United States Great Plains
NASA Technical Reports Server (NTRS)
Schubert, Siegfried D.; Suarez, Max J.; Pegion, Philip J.; Koster, Randal D.; Bacmeister, Julio T.
2003-01-01
This study examines the causes of long-term droughts in the United States Great Plains (USGP). The focus is on the relative roles of slowly varying SSTs and interactions with soil moisture. The results from ensembles of long-term (1930-1999) simulations carried out with the NASA Seasonal-to-Interannual Prediction Project (NSIPP-1) atmospheric general circulation model (AGCM) show that the SSTs account for about 1/3 of the total low-frequency rainfall variance in the USGP. Results from idealized experiments with climatological SSTs suggest that the remaining low-frequency variance in USGP precipitation is the result of interactions with soil moisture. In particular, simulations with soil moisture feedback show a five-fold increase in the variance of annual USGP precipitation compared with simulations in which the soil feedback is excluded. In addition to increasing variance, the interactions with the soil introduce year-to-year memory in the hydrological cycle that is consistent with a red noise process, in which the deep soil is forced by white noise and damped with a time scale of about 2 years. As such, the role of low-frequency SST variability is to bias the net forcing on the soil moisture, driving the random process preferentially toward either wet or dry conditions.
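The red-noise picture can be written down in a few lines: an AR(1) process in which white-noise forcing is damped on a roughly 2-year timescale, per the abstract. This is a sketch; the step size, series length, and noise amplitude are arbitrary choices.

```python
# Sketch: deep soil moisture as a damped (AR(1)) response to white noise.
import numpy as np

tau_years = 2.0
dt = 1.0 / 12.0                   # monthly steps
phi = np.exp(-dt / tau_years)     # damping per step
rng = np.random.default_rng(1)

w = rng.normal(size=1200)         # white-noise forcing, 100 years
soil = np.zeros_like(w)
for t in range(1, len(w)):
    soil[t] = phi * soil[t - 1] + w[t]
# A nonzero-mean forcing (the SST-driven bias described above) would push
# this red-noise process preferentially toward wet or dry states.
```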
Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA.
Festing, M F
2001-01-01
In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
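For concreteness, the two most common designs named above can be analysed in a few lines. The sketch below (Python with statsmodels, made-up data) fits a one-way model for a completely randomised design and adds a block term for a randomised block design, where the block factor absorbs run-to-run variation such as replication in time.

```python
# Sketch: ANOVA for completely randomised vs. randomised block designs.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "response":  [4.1, 3.9, 5.2, 5.5, 6.8, 6.4, 4.3, 5.0, 6.6],
    "treatment": ["A", "A", "B", "B", "C", "C", "A", "B", "C"],
    "block":     [1, 2, 1, 2, 1, 2, 3, 3, 3],  # e.g. replicate runs in time
})

# Completely randomised design: treatment is the only factor.
crd = smf.ols("response ~ C(treatment)", data=df).fit()
print(anova_lm(crd))

# Randomised block design: the block term removes between-run variation,
# increasing power to detect the treatment effect.
rbd = smf.ols("response ~ C(treatment) + C(block)", data=df).fit()
print(anova_lm(rbd))
```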
Turbulent Intensities and Velocity Spectra for Bare and Forested Gentle Hills: Flume Experiments
NASA Astrophysics Data System (ADS)
Poggi, Davide; Katul, Gabriel G.
2008-10-01
To investigate how velocity variances and spectra are modified by the simultaneous action of topography and canopy, two flume experiments were carried out on a train of gentle cosine hills differing in surface cover. The first experiment was conducted above a bare surface while the second was conducted within and above a densely arrayed rod canopy. The velocity variances and spectra from these two experiments were compared in the middle, inner, and near-surface layers. In the middle layer, and for the canopy surface, the longitudinal and vertical velocity variances (σ_u², σ_w²) were in phase with the hill-induced spatial mean velocity perturbation (Δu) around the so-called background state (taken here as the longitudinal mean at a given height), as predicted by rapid distortion theory (RDT). However, for the bare surface case, σ_u² and σ_w² remained out of phase with Δu by about L/2, where L is the hill half-length. In the canopy layer, wake production was a significant source of turbulent energy for σ_w², and its action was to re-align the velocity variances with Δu in those layers, a mechanism completely absent for the bare surface case. Such a lower 'boundary condition' resulted in longitudinal variations of σ_w² being nearly in phase with Δu above the canopy surface. In the inner and middle layers, the spectral distortions by the hill remained significant for the background state of the bare surface case but not for the canopy surface case. In particular, in the inner and middle layers of the bare surface case, the effective exponents derived from the locally measured power spectra diverged from their expected -5/3 value for inertial subrange scales. These departures were spatially correlated with the hill surface. However, for the canopy surface case, the spectral exponents were near -5/3 above the canopy, though the minor departures from -5/3 were also correlated with the hill surface. Inside the canopy, wake production and energy short-circuiting resulted in significant departures from -5/3, which also appeared correlated with the hill surface through the wake production contribution and its alignment with Δu. Moreover, scales commensurate with von Kármán vortex streets well described the wake production scales inside the canopy, confirming the important role of the mean flow in producing wakes. The spectra inside the canopy on the lee side of the hill, where a negative mean flow delineated a recirculation zone, suggested that the wake production scales there were 'broader' than their counterparts outside the recirculation zone. Inside the recirculation zone, there was significantly more energy at higher frequencies than in regions outside it.
Kimme-Smith, C; Rothschild, P A; Bassett, L W; Gold, R H; Moler, C
1989-01-01
Six different combinations of film-processor temperature (33.3 degrees C, 35 degrees C), development time (22 sec, 44 sec), and chemistry (Du Pont medium contrast developer [MCD] and Kodak rapid process [RP] developer) were each evaluated by separate analyses with Hurter and Driffield curves, test images of plastic step wedges, noise variance analysis, and phantom images; each combination also was evaluated clinically. Du Pont MCD chemistry produced greater contrast than did Kodak RP chemistry. A change in temperature from 33.3 degrees C (92 degrees F) to 35 degrees C (95 degrees F) had the least effect on dose and image contrast. Temperatures of 36.7 degrees C (98 degrees F) and 38.3 degrees C (101 degrees F) also were tested with extended processing. The speed increased for 36.7 degrees C but decreased at 38.3 degrees C. Base plus fog increased, but contrast decreased for these higher temperatures. Increasing development time had the greatest effect on decreasing the dose required for equivalent film darkening when imaging BR12 breast equivalent test objects; ion chamber measurements showed a 32% reduction in dose when the development time was increased from 22 to 44 sec. Although noise variance doubled in images processed with the extended development time, diagnostic capability was not compromised. Extending the processing time for mammographic films was an effective method of dose reduction, whereas varying the processing temperature and chemicals had less effect on contrast and dose.
Jeenah, M; September, W; Graadt van Roggen, F; de Villiers, W; Seftel, H; Marais, D
1993-01-04
Simvastatin, an inhibitor of HMG-CoA reductase, lowers the plasma total cholesterol and LDL-cholesterol concentrations in familial hypercholesterolemic patients. The efficacy of the drug shows considerable inter-individual variation, however. In this study we assessed the influence of certain LDL-receptor gene mutations on this variation. A group of 20 male and female heterozygous familial hypercholesterolemic patients, all Afrikaners and each bearing one of two different LDL-receptor gene mutations, FH Afrikaner-1 (FH1) and FH Afrikaner-2 (FH2), was treated with simvastatin (40 mg once daily) for 18 months. The average reduction in total plasma cholesterol was 35.3% in the FH2 men but only 23.2% in the FH1 men (P = 0.005); the reduction in LDL-cholesterol concentrations was also greater in the FH2 group (39% as opposed to 27.1%, P = 0.02). The better response of the FH2 group was also evident when men and women were considered together. Female FH1 patients responded better to simvastatin treatment, however, than did males with the same gene defect. Mutations at the LDL-receptor locus may thus play a significant role in the variable efficacy of the drug. The particular mutations in the males of this group may have contributed up to 35% of the variance in total cholesterol response and 29% of the variance in LDL-cholesterol response to simvastatin treatment.
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
Operational radar and raingauge networks are, on their own, insufficient for urban hydrology. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed, mainly applied at coarser spatial and temporal scales; their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. Two techniques, based respectively on mean bias reduction and on error variance minimisation, were then selected and tested using as a case study an urban catchment (~8.65 km²) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates, and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based on error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
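As a rough illustration of the two adjustment ideas, the sketch below applies a single mean-field bias factor to the radar field and then merges radar and gauge values with inverse-variance weights, which minimises the variance of the combined estimate. The numbers and assumed error variances are hypothetical; the paper's actual algorithms are more elaborate.

```python
# Sketch: mean bias reduction vs. error variance minimisation.
import numpy as np

gauge = np.array([2.0, 3.5, 1.2, 4.1])   # point gauge totals (mm)
radar = np.array([1.5, 2.8, 1.0, 3.2])   # radar estimates at gauge sites

# 1) Mean bias reduction: scale the whole radar field by one factor.
bias = gauge.sum() / radar.sum()
radar_adjusted = radar * bias

# 2) Error variance minimisation: combine two unbiased estimates with
#    inverse-variance weights (assumed error variances below).
var_radar, var_gauge = 0.8, 0.2
w = var_gauge / (var_gauge + var_radar)   # weight on the radar term
merged = w * radar_adjusted + (1 - w) * gauge
print(bias, merged)
```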
Irreducible Uncertainty in Terrestrial Carbon Projections
NASA Astrophysics Data System (ADS)
Lovenduski, N. S.; Bonan, G. B.
2016-12-01
We quantify and isolate the sources of uncertainty in projections of carbon accumulation by the ocean and terrestrial biosphere over 2006-2100 using output from Earth System Models participating in the 5th Coupled Model Intercomparison Project. We consider three independent sources of uncertainty in our analysis of variance: (1) internal variability, driven by random, internal variations in the climate system, (2) emission scenario, driven by uncertainty in future radiative forcing, and (3) model structure, wherein different models produce different projections given the same emission scenario. Whereas uncertainty in projections of ocean carbon accumulation by 2100 is 100 Pg C and driven primarily by emission scenario, uncertainty in projections of terrestrial carbon accumulation by 2100 is 50% larger than that of the ocean, and driven primarily by model structure. This structural uncertainty is correlated with emission scenario: the variance associated with model structure is an order of magnitude larger under a business-as-usual scenario (RCP8.5) than a mitigation scenario (RCP2.6). In an effort to reduce this structural uncertainty, we apply various model weighting schemes to our analysis of variance in terrestrial carbon accumulation projections. The largest reductions in uncertainty are achieved when giving all the weight to a single model; here the uncertainty is of a similar magnitude to the ocean projections. Such an analysis suggests that this structural uncertainty is irreducible given current terrestrial model development efforts.
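The three-way variance partition described above can be sketched with synthetic projections indexed by scenario, model, and ensemble member; the generating variances below are invented and serve only to show how each component is isolated.

```python
# Sketch: partitioning projection spread into internal variability,
# model structure, and emission scenario (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
n_s, n_m, n_e = 2, 5, 4                      # scenarios, models, members
scen = rng.normal(0, 3, n_s)[:, None, None]
model = rng.normal(0, 5, (n_s, n_m))[..., None]
internal = rng.normal(0, 1, (n_s, n_m, n_e))
proj = 100 + scen + model + internal         # synthetic accumulations (Pg C)

member_mean = proj.mean(axis=2)              # average out internal noise
var_internal = proj.var(axis=2).mean()       # within-run spread
var_model = member_mean.var(axis=1).mean()   # model spread per scenario
var_scenario = member_mean.mean(axis=1).var()
total = var_internal + var_model + var_scenario
for name, v in [("internal", var_internal), ("model", var_model),
                ("scenario", var_scenario)]:
    print(name, round(v / total, 2))         # fraction of total variance
```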
Massage and Reiki used to reduce stress and anxiety: Randomized Clinical Trial
Kurebayashi, Leonice Fumiko Sato; Turrini, Ruth Natalia Teresa; de Souza, Talita Pavarini Borges; Takiguchi, Raymond Sehiji; Kuba, Gisele; Nagumo, Marisa Toshi
2016-01-01
Objective: to evaluate the effectiveness of massage and Reiki in reducing stress and anxiety in clients at the Institute for Integrated and Oriental Therapy in São Paulo, Brazil. Method: randomized clinical trial run in parallel with an initial sample of 122 people divided into three groups: Massage + Rest (G1), Massage + Reiki (G2), and a control group without intervention (G3). The Stress Symptoms List and the State-Trait Anxiety Inventory were used to evaluate the groups at the start and after 8 sessions (1 month), during 2015. Results: analysis of variance (ANOVA) showed statistical differences in stress (p = 0.000), with a difference between groups 2 and 3 (p = 0.014), a 33% reduction, and a Cohen's d of 0.78. For state anxiety, there was a reduction in the intervention groups compared with the control group (p < 0.01), with a 21% reduction in group 2 (Cohen's d of 1.18) and a 16% reduction in group 1 (Cohen's d of 1.14). Conclusion: Massage + Reiki produced the best results among the groups; further studies with a placebo group are recommended to evaluate the impact of Reiki separately from the other techniques. RBR-42c8wp PMID:27901219
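For reference, the effect sizes above are Cohen's d: the between-group mean difference standardized by the pooled standard deviation. A minimal sketch with made-up score vectors:

```python
# Sketch: Cohen's d for two independent groups (hypothetical scores).
import numpy as np

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

control = np.array([48.0, 52.0, 50.0, 55.0, 47.0])
massage_reiki = np.array([38.0, 41.0, 35.0, 40.0, 36.0])
print(cohens_d(control, massage_reiki))  # standardized group difference
```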
The Counseling Competencies Scale: Validation and Refinement
ERIC Educational Resources Information Center
Lambie, Glenn W.; Mullen, Patrick R.; Swank, Jacqueline M.; Blount, Ashley
2018-01-01
Supervisors evaluated counselors-in-training at multiple points during their practicum experience using the Counseling Competencies Scale (CCS; N = 1,070). The CCS evaluations were randomly split to conduct exploratory factor analysis and confirmatory factor analysis, resulting in a 2-factor model (61.5% of the variance explained).
ANALYSIS OF THE STRUCTURE OF MAGNETIC FIELDS THAT INDUCED INHIBITION OF STIMULATED NEURITE OUTGROWTH
The important experiments showing nonlinear amplitude dependences of the neurite outgrowth in pheochromocytoma nerve cells due to ELF magnetic field exposure had been carried out in a nonuniform ac magnetic field. The nonuniformity entailed larger than expected variances in magne...
Gender Variance on Campus: A Critical Analysis of Transgender Voices
ERIC Educational Resources Information Center
Mintz, Lee M.
2011-01-01
Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…
Covariate Imbalance and Precision in Measuring Treatment Effects
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2011-01-01
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
Off-Campus Student Housing Satisfaction
ERIC Educational Resources Information Center
Delgadillo, Lucy; Erickson, Luke V.
2006-01-01
Results from a survey of 180 students at a western university suggest that apartment manager's responsiveness and fairness explain 50% of the variance in determining student satisfaction with off-campus housing. Variables that measured aspects of the off-campus housing experience included manager fairness, likelihood of renting from the manager…
Occupational Tedium among Prison Officers.
ERIC Educational Resources Information Center
Shamir, Boaz; Drory, Amos
1982-01-01
Studied sources of occupational stress in the prison officer's job and investigated their relationships with tedium (defined as a general experience of physical, emotional, and attitudinal exhaustion). Found the variables making the largest unique contributions to the variance in tedium are role overload, management support, and societal support.…
Franklin, Rebecca A; Butler, Michael P; Bentley, Jacob A
2018-08-01
Yoga contains sub-components related to its physical postures (asana), breathing methods (pranayama), and meditation (dhyana). To test the hypothesis that specific yoga practices are associated with reduced psychological distress, 186 adults completed questionnaires assessing life stressors, symptom severity, and experience with each of these aspects of yoga. Each yoga sub-component was found to be negatively correlated with psychological distress indices. However, differing patterns of relationship to psychological distress symptoms were found for each yoga sub-component. Experience with asana was negatively correlated with global psychological distress (r = -.21, p < .01), and symptoms of anxiety (r = -.18, p = .01) and depression (r = -.17, p = .02). These relationships remained statistically significant after accounting for variance attributable to Social Readjustment Rating Scale scores (GSI: r = -.19, p = .01; BSI Anxiety: r = -.16, p = .04; BSI Depression: r = -.14, p = .05). By contrast, the correlations between other yoga sub-components and symptom subscales became non-significant after accounting for exposure to life stressors. Moreover, stressful life events moderated the predictive relationship between amount of asana experience and depressive symptoms. Asana was not related to depressive symptoms at low levels of life stressors, but became associated at mean (t[182] = -2.73, p < .01) and high levels (t[182] = -3.56, p < .001). Findings suggest asana may possess depressive symptom reduction benefits, particularly as life stressors increase. Additional research is needed to differentiate whether asana has an effect on psychological distress, and to better understand potential psychophysiological mechanisms of action.
Quinn, Diane M.; Williams, Michelle K.; Quintana, Francisco; Gaskins, Jennifer L.; Overstreet, Nicole M.; Pishori, Alefiyah; Earnshaw, Valerie A.; Perez, Giselle; Chaudoir, Stephenie R.
2014-01-01
Understanding how stigmatized identities contribute to increased rates of depression and anxiety is critical to stigma reduction and mental health treatment. There has been little research testing multiple aspects of stigmatized identities simultaneously. In the current study, we collected data from a diverse, urban, adult community sample of people with a concealed stigmatized identity (CSI). We targeted 5 specific CSIs – mental illness, substance abuse, experience of domestic violence, experience of sexual assault, and experience of childhood abuse – that have been shown to put people at risk for increased psychological distress. We collected measures of the anticipation of being devalued by others if the identity became known (anticipated stigma), the level of defining oneself by the stigmatized identity (centrality), the frequency of thinking about the identity (salience), the extent of agreement with negative stereotypes about the identity (internalized stigma), and extent to which other people currently know about the identity (outness). Results showed that greater anticipated stigma, greater identity salience, and lower levels of outness each uniquely and significantly predicted variance in increased psychological distress (a composite of depression and anxiety). In examining communalities and differences across the five identities, we found that mean levels of the stigma variables differed across the identities, with people with substance abuse and mental illness reporting greater anticipated and internalized stigma. However, the prediction pattern of the variables for psychological distress was similar across the substance abuse, mental illness, domestic violence, and childhood abuse identities (but not sexual assault). Understanding which components of stigmatized identities predict distress can lead to more effective treatment for people experiencing psychological distress. PMID:24817189
Long-term oxytocin administration enhances the experience of attachment.
Bernaerts, Sylvie; Prinsen, Jellina; Berra, Emmely; Bosmans, Guy; Steyaert, Jean; Alaerts, Kaat
2017-04-01
The neuropeptide oxytocin (OT) is known to play a pivotal role in a variety of complex social behaviors by promoting a prosocial attitude and interpersonal bonding. Previous studies showed that a single dose of exogenously administered OT can affect trust and feelings of attachment insecurity. With the present study, we explored the effects of two weeks of daily OT administration on measures of state and trait attachment using a double-blind, between-subjects, randomized, placebo-controlled design. In 40 healthy young adult men, state and trait attachment were assessed before and after two weeks of daily intranasal OT (24 IU) or placebo using the State Adult Attachment Scale and the Inventory of Parent and Peer Attachment. Mood, social responsiveness, and quality of life were additionally assessed as secondary outcome measures. Reductions in attachment avoidance and increases in reports of attachment toward peers were observed after two weeks of OT treatment. Further, treatment-induced changes were most pronounced for participants with less secure attachment toward their peers, indicating that normal variance at baseline modulated treatment response. OT treatment was additionally associated with changes in mood, indicating decreases in feelings of tension and (tentatively) anger in the OT group but not in the placebo group. Further, at the end of the two-week trial, both treatment groups (OT, placebo) reported an increase in social responsiveness and quality of life, but the effects were specific to the OT treatment only for reports of 'social motivation'. In summary, the observed improvements on state and trait dimensions of attachment after a multiple-dose treatment with OT provide further evidence in support of a pivotal role of OT in promoting the experience of attachment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ashengroph, Morahem; Ababaf, Sajad
2014-12-01
Microbial caffeine removal is a green solution for the treatment of caffeinated products and agro-industrial effluents. We directed this investigation at optimizing a bio-decaffeination process with growing cultures of Pseudomonas pseudoalcaligenes using the Taguchi methodology, a structured statistical approach that can lower variation in a process through design of experiments (DOE). Five parameters, i.e. initial fructose, tryptone, Zn(2+) ion and caffeine concentrations, as well as incubation time, were selected, and an L16 orthogonal array was applied to design experiments with four 4-level factors and one 3-level factor (4^4 × 3^1). Data analysis was performed using the analysis of variance (ANOVA) method. Furthermore, the optimal conditions were determined by combining the optimal levels of the significant factors and were verified by a confirmation experiment. Residual caffeine concentration in the reaction mixture was measured using high-performance liquid chromatography (HPLC). Use of the Taguchi methodology for optimization of the design parameters resulted in about 86.14% reduction of caffeine in 48 h of incubation when 5 g/l fructose, 3 mM Zn(2+) ion and 4.5 g/l caffeine were present in the designed media. Under the optimized conditions, the yield of degradation of caffeine (4.5 g/l) by the native strain Pseudomonas pseudoalcaligenes TPS8 increased from 15.8% to 86.14%, which is 5.4-fold higher than the normal yield. According to the experimental results, the Taguchi methodology provides a powerful approach for identifying the favorable parameters for caffeine removal using strain TPS8, and may also be applicable to similar strains to improve the yield of caffeine removal from caffeine-containing solutions.
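A toy version of the Taguchi main-effects analysis appears below, using a hypothetical L4 array with two 2-level factors rather than the study's L16 design; it computes a larger-is-better signal-to-noise ratio per run and the mean S/N at each factor level, the level with the highest mean being the Taguchi-optimal setting.

```python
# Sketch: main-effects analysis of a Taguchi-style orthogonal array
# (toy L4 design, made-up removal percentages).
import numpy as np
import pandas as pd

runs = pd.DataFrame({
    "fructose": [1, 1, 2, 2],                 # coded factor levels
    "zinc":     [1, 2, 1, 2],
    "removal":  [52.0, 61.0, 70.0, 86.0],     # % caffeine removed
})

# Larger-is-better S/N ratio, the usual Taguchi response transform.
runs["sn"] = -10 * np.log10(1.0 / runs["removal"] ** 2)

# Main effect of each factor = mean S/N at each of its levels.
for factor in ["fructose", "zinc"]:
    print(factor, runs.groupby(factor)["sn"].mean().to_dict())
```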
Perceptual context and individual differences in the language proficiency of preschool children.
Banai, Karen; Yifat, Rachel
2016-02-01
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
Crawford, D C; Bell, D S; Bamber, J C
1993-01-01
A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
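The variance/mean statistical feature named above underlies classical adaptive speckle filters. The sketch below is a generic Lee-type filter, not the authors' ASR algorithm: local variance relative to an assumed speckle noise variance controls how strongly each pixel is smoothed.

```python
# Sketch: variance/mean based adaptive speckle smoothing (Lee-type).
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=0.05):
    """img: float image; win: window size; noise_var: assumed speckle variance."""
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    var = sq_mean - mean ** 2
    # Where local variance ~ noise variance the region is pure speckle
    # and gets smoothed; structured regions (high variance) are preserved.
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-9), 0.0, 1.0)
    return mean + gain * (img - mean)
```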
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others, and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, we study large simulation data from an engineering application in the automotive industry (car crash simulation).
Nitrogen reduction pathways in estuarine sediments: Influences of organic carbon and sulfide
NASA Astrophysics Data System (ADS)
Plummer, Patrick; Tobias, Craig; Cady, David
2015-10-01
Potential rates of sediment denitrification, anaerobic ammonium oxidation (anammox), and dissimilatory nitrate reduction to ammonium (DNRA) were mapped across the entire Niantic River Estuary, CT, USA, at 100-200 m scale resolution consisting of 60 stations. On the estuary scale, denitrification accounted for ~ 90% of the nitrogen reduction, followed by DNRA and anammox. However, the relative importance of these reactions to each other was not evenly distributed through the estuary. A Nitrogen Retention Index (NIRI) was calculated from the rate data (DNRA/(denitrification + anammox)) as a metric to assess the relative amounts of reactive nitrogen being recycled versus retained in the sediments following reduction. The distribution of rates and accompanying sediment geochemical analytes suggested variable controls on specific reactions, and on the NIRI, depending on position in the estuary and that these controls were linked to organic carbon abundance, organic carbon source, and pore water sulfide concentration. The relationship between NIRI and organic carbon abundance was dependent on organic carbon source. Sulfide proved the single best predictor of NIRI, accounting for 44% of its observed variance throughout the whole estuary. We suggest that as a single metric, sulfide may have utility as a proxy for gauging the distribution of denitrification, anammox, and DNRA.
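The NIRI itself is a simple ratio of the rate data; a one-function sketch with hypothetical rates:

```python
# Sketch: Nitrogen Retention Index as defined in the abstract.
def niri(dnra, denitrification, anammox):
    """Nitrogen retained in the sediment (DNRA, recycled as ammonium)
    relative to nitrogen removed as gas (denitrification + anammox)."""
    return dnra / (denitrification + anammox)

print(niri(dnra=2.0, denitrification=15.0, anammox=1.5))  # hypothetical rates
```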
NASA Astrophysics Data System (ADS)
Yang, L.; Molins, S.; Beller, H. R.; Brodie, E. L.; Steefel, C.; Nico, P. S.; Han, R.
2010-12-01
Microbially mediated Cr(VI) reduction at the Hanford 100H area was investigated by flow-through column experiments. Three separate experiments were conducted to promote microbial activities associated with denitrification, iron reduction and sulfate reduction, respectively. Replicate columns packed with natural sediments from the site, maintained under an anaerobic environment, were injected with 5 mM lactate as the electron donor and 5 μM Cr(VI) in all experiments. Sulfate and nitrate solutions were added to act as the main electron acceptors in the respective experiments, while the iron columns relied on the indigenous sediment iron (and manganese) oxides as electron acceptors. Column effluent solutions were analyzed by IC and ICP-MS to monitor the microbial consumption/conversion of lactate and the associated Cr(VI) reduction. Biogeochemical reactive transport modeling was performed to gain further insight into the reaction mechanisms and Cr(VI) bioreduction rates. All experimental columns showed a reduction of the injected Cr(VI). Columns under denitrifying conditions showed the least Cr(VI) reduction at early stages (<60 days) compared to columns run under other experimental conditions, but became more active over time, and ultimately showed the most consistent Cr(VI) reduction. A strong correlation between denitrification and Cr(VI) reduction processes was observed, in agreement with the results obtained in batch experiments with a denitrifying bacterium isolated from the Hanford site. The accumulation of nitrite does not appear to have an adverse effect on Cr(VI) reduction rates. Reactive transport simulations indicated that biomass growth completely depleted influent ammonium, and called for an additional source of N to account for the measured reduction rates. The iron columns were the least active, with undetectable consumption of the injected lactate, the slowest cell growth, and the smallest change in Cr(VI) concentrations during the course of the experiment. In contrast, columns under sulfate-reducing/fermentative conditions exhibited the greatest Cr(VI) reduction capacity. Two sulfate columns evolved to complete lactate fermentation, with acetate and propionate produced in the column effluent after 40 days of experiments. These fermenting columns showed a complete removal of injected Cr(VI), visible precipitation of sulfide minerals, and a significant increase in effluent Fe and Mn concentrations. Reactive transport simulations suggested that direct reduction of Cr(VI) by Fe(II) and Mn(II) released from the sediment could account for the observed Cr(VI) removal. The biogeochemical modeling was employed to test two hypotheses that could explain the release of Fe(II) and Mn(II) from the column sediments: (1) acetate produced by lactate fermentation provided the substrate for the growth of iron(III)- and manganese(IV)-oxide reducers, and (2) direct reduction of iron(III) and manganese(IV) oxides by hydrogen sulfide generated during sulfate reduction. Overall, experimental and modeling results suggested that Cr(VI) reduction in the sulfate-reducing columns occurred through a complex network of microbial reactions that included fermentation, sulfate reduction, and possibly stimulated iron-reducing communities.
Pitchers, W. R.; Brooks, R.; Jennions, M. D.; Tregenza, T.; Dworkin, I.; Hunt, J.
2013-01-01
Phenotypic integration and plasticity are central to our understanding of how complex phenotypic traits evolve. Evolutionary change in complex quantitative traits can be predicted using the multivariate breeders’ equation, but such predictions are only accurate if the matrices involved are stable over evolutionary time. Recent work, however, suggests that these matrices are temporally plastic, spatially variable and themselves evolvable. The data available on phenotypic variance-covariance matrix (P) stability is sparse, and largely focused on morphological traits. Here we compared P for the structure of the complex sexual advertisement call of six divergent allopatric populations of the Australian black field cricket, Teleogryllus commodus. We measured a subset of calls from wild-caught crickets from each of the populations and then a second subset after rearing crickets under common-garden conditions for three generations. In a second experiment, crickets from each population were reared in the laboratory on high- and low-nutrient diets and their calls recorded. In both experiments, we estimated P for call traits and used multiple methods to compare them statistically (Flury hierarchy, geometric subspace comparisons and random skewers). Despite considerable variation in means and variances of individual call traits, the structure of P was largely conserved among populations, across generations and between our rearing diets. Our finding that P remains largely stable, among populations and between environmental conditions, suggests that selection has preserved the structure of call traits in order that they can function as an integrated unit. PMID:23530814
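Of the three matrix-comparison methods named above, random skewers is the easiest to illustrate: draw random unit selection gradients, apply the multivariate breeders' equation Δz = Pβ to each matrix, and average the vector correlation of the paired responses. The sketch below is illustrative, not the authors' code; the covariance matrices in the usage example are arbitrary.

```python
# Sketch: random-skewers similarity between two covariance matrices.
import numpy as np

def random_skewers(P1, P2, n=1000, rng=None):
    rng = np.random.default_rng(rng)
    k = P1.shape[0]
    beta = rng.normal(size=(n, k))
    beta /= np.linalg.norm(beta, axis=1, keepdims=True)  # unit skewers
    r1 = beta @ P1          # response of matrix 1 to each skewer
    r2 = beta @ P2          # response of matrix 2 to the same skewer
    cos = np.sum(r1 * r2, axis=1) / (
        np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1))
    return cos.mean()       # near 1: the matrices respond alike

# Usage with two arbitrary sample covariance matrices:
rng0 = np.random.default_rng(0)
X = rng0.normal(size=(100, 4))
print(random_skewers(np.cov(X.T), np.cov(X[:50].T), rng=1))
```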