Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
Chiba, G.; Tsuji, M.; Narabayashi, T.
2015-01-15
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
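The abstract does not reproduce the defining formula, but the machinery underneath is standard first-order ("sandwich rule") uncertainty propagation: for a sensitivity vector S and nuclear-data covariance matrix M, the propagated variance of an integral parameter is S^T M S. The sketch below shows one plausible reading of a per-group variance reduction factor, namely the ratio by which the target variance shrinks when one energy group's covariance is reduced; the three-group structure, the covariance values, and the halving assumption are illustrative, not taken from the paper.

    import numpy as np

    # Hypothetical 3-group sensitivities of a target integral parameter and a
    # hypothetical nuclear-data covariance matrix (illustrative values only).
    S = np.array([0.8, 0.3, 0.1])
    M = np.array([[4.0e-4, 1.0e-4, 0.0],
                  [1.0e-4, 2.0e-4, 0.5e-4],
                  [0.0,    0.5e-4, 1.0e-4]])

    def variance(S, M):
        # First-order ("sandwich rule") propagated variance: S^T M S.
        return S @ M @ S

    base = variance(S, M)
    for g in range(len(S)):
        # Assume improved measurements halve the variance of group g,
        # scaling its covariances so that correlations are preserved.
        scale = np.ones(len(S))
        scale[g] = np.sqrt(0.5)
        vrf = base / variance(S, M * np.outer(scale, scale))
        print(f"group {g}: variance reduction factor = {vrf:.3f}")

Groups with the largest factor are where better nuclear data pay off most, which matches the screening role the abstract describes.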
Hybrid biasing approaches for global variance reduction.
Wu, Zeyun; Abdel-Khalik, Hany S
2013-02-01
A new variant of a Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight-window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses.
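Of the steps described, the effective rank of the response covariance matrix is the most self-contained: it sets the number of uncorrelated pseudo responses used to bias particles. A minimal sketch of extracting such a rank by eigendecomposition follows; the synthetic low-rank covariance and the 99% trace-capture threshold are assumptions for illustration, not the paper's criterion.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic correlated responses: 50 responses driven by 3 latent factors.
    n_resp, n_factors, n_samples = 50, 3, 1000
    B = rng.normal(size=(n_resp, n_factors))
    samples = B @ rng.normal(size=(n_factors, n_samples))
    cov = np.cov(samples)

    # Effective rank: smallest k whose leading eigenvalues capture 99% of the trace.
    eigvals = np.clip(np.linalg.eigvalsh(cov)[::-1], 0.0, None)  # descending order
    frac = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(frac, 0.99)) + 1
    print(f"effective rank ~ {k} (true latent dimension: {n_factors})")

The k leading eigenvectors would then define the uncorrelated pseudo responses employed for biasing.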
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems, the so-called corrector problems, be solved at the microscale. In a random environment, these problems are stochastic and therefore need to be solved repeatedly for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
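Antithetic variables are among the classical techniques "borrowed from other contexts" in this literature. A generic sketch follows, with a scalar function standing in for the corrector solve (the actual corrector problem is a PDE in the random medium; the stand-in function and sample sizes are assumptions):

    import numpy as np

    rng = np.random.default_rng(1)

    def corrector_output(u):
        # Scalar stand-in for the effective coefficient extracted from one
        # corrector solve; u in (0, 1) parameterizes the random configuration.
        return np.exp(u) / (1.0 + u)

    n = 10_000
    u = rng.random(n)
    plain = corrector_output(u)                       # n solves, n samples

    # Antithetic pairing: reuse each draw as (u, 1 - u) and average the pair.
    half = u[: n // 2]
    anti = 0.5 * (corrector_output(half) + corrector_output(1.0 - half))

    # Both estimators use the same number of function evaluations (n).
    print("plain MC      :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
    print("antithetic MC :", anti.mean(), "+/-", anti.std(ddof=1) / np.sqrt(n // 2))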
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed.
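Of the three techniques listed, quasi-random sampling is the easiest to isolate in a few lines. The sketch below contrasts pseudorandom and scrambled Sobol' sampling on a toy integrand standing in for a dose kernel; the integrand and sample count are illustrative assumptions, not the paper's voxelized dose code.

    import numpy as np
    from scipy.stats import qmc

    def f(x):
        # Toy smooth integrand over the unit square, standing in for a dose kernel.
        return np.exp(-(x[:, 0] ** 2 + x[:, 1] ** 2))

    n = 2 ** 12                                  # power of two, as Sobol' prefers
    rng = np.random.default_rng(2)
    x_pseudo = rng.random((n, 2))
    x_sobol = qmc.Sobol(d=2, scramble=True, seed=2).random(n)

    print("pseudorandom :", f(x_pseudo).mean())
    print("Sobol'       :", f(x_sobol).mean())
    # Replicating this many times shows the quasi-random error decaying close
    # to O(1/n) instead of the O(1/sqrt(n)) of plain pseudorandom sampling.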
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium.
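The stratification idea can be sketched for a single optically thick, purely absorbing slab cell, matching the paper's simple test case: emission positions are split into interior and fringe strata, the fringe receives a disproportionate share of particles, and per-stratum weights keep the estimator unbiased. The slab parameters, fringe width, and 80/20 allocation below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    L, mu = 1.0, 50.0        # slab width and absorption coefficient (optically thick)
    fringe = 5.0 / mu        # assumed fringe width: a few mean free paths
    p_fringe = 0.8           # assumed fraction of particles allocated to the fringe

    def escape_estimate(n=100_000):
        # Emission is physically uniform over the cell; particles travel in +x
        # and escape through x = L.
        in_fringe = rng.random(n) < p_fringe
        x = np.where(in_fringe,
                     rng.uniform(L - fringe, L, n),    # fringe stratum
                     rng.uniform(0.0, L - fringe, n))  # interior stratum
        # Stratum weight = physical probability of stratum / sampling probability,
        # which keeps the estimator unbiased.
        w = np.where(in_fringe,
                     (fringe / L) / p_fringe,
                     ((L - fringe) / L) / (1.0 - p_fringe))
        score = w * np.exp(-mu * (L - x))              # survival to the boundary
        return score.mean(), score.std(ddof=1) / np.sqrt(n)

    print("escape probability, fringe-biased:", escape_estimate())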
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code penelope for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 × 10^5 s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
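Interaction forcing, one of the two techniques combined here, has a standard textbook form: force the particle to interact within a distance d and multiply its weight by the analog probability of doing so. The sketch below shows that free-path sampling step generically; it is not penelope's internal implementation, and the numeric parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    def forced_path(mu, d, n):
        """Sample free paths forced into (0, d) for attenuation coefficient mu,
        returning (path, weight). Analog sampling would be -ln(xi)/mu, and a
        fraction exp(-mu*d) of particles would never interact within (0, d)."""
        xi = rng.random(n)
        p_int = 1.0 - np.exp(-mu * d)            # analog probability of interacting
        path = -np.log(1.0 - xi * p_int) / mu    # inverse CDF of truncated exponential
        weight = np.full(n, p_int)               # weight correction keeps tallies unbiased
        return path, weight

    path, w = forced_path(mu=0.05, d=10.0, n=5)  # arbitrary illustrative parameters
    print(np.round(path, 3), np.round(w, 3))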
Replicative Use of an External Model in Simulation Variance Reduction
1996-03-01
… measures used are confidence interval width reduction, realized coverage, and estimated Mean Square Error. Results of this study indicate analytical … control variates achieve comparable confidence interval width reduction with internal and external control variates. However, the analytical control …
1991-03-01
Adjusted Estimators for Variance Reduction in Computer Simulation, by Richard L. Ressler, March 1991. Dissertation Advisor: Peter A.W. Lewis. … OF NONLINEAR CONTROLS AND REGRESSION-ADJUSTED ESTIMATORS FOR VARIANCE REDUCTION IN COMPUTER SIMULATION … This dissertation develops new techniques for variance reduction in computer simulation. It demonstrates that
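Both fragmentary records above concern control variates. Their specific estimators cannot be recovered from the extraction, but the construction they build on is compact: subtract a correlated quantity with known mean, scaled by an estimated near-optimal coefficient. A generic sketch follows (toy integrand chosen for illustration; estimating c from the same sample introduces a small bias that is usually ignored in practice):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    u = rng.random(n)

    y = np.exp(u)       # target: E[exp(U)] = e - 1
    x = u               # control variate with known mean E[U] = 0.5

    # Near-optimal coefficient c = Cov(Y, X) / Var(X).
    c = np.cov(y, x)[0, 1] / x.var()
    y_cv = y - c * (x - 0.5)

    print("plain          :", y.mean(), " variance:", y.var(ddof=1))
    print("control variate:", y_cv.mean(), " variance:", y_cv.var(ddof=1))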
NASA Astrophysics Data System (ADS)
Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki
2016-11-01
Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.
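The study's figure of merit is simple to state: the fraction by which a correction lowers inter-subject variance. The sketch below shows the bookkeeping on synthetic data with an idealized, perfectly removable nuisance effect; the numbers are simulated, not the study's 72-subject MRSI measurements.

    import numpy as np

    rng = np.random.default_rng(6)
    n_subjects = 72

    true_conc = rng.normal(10.0, 0.5, n_subjects)   # biological spread
    nuisance = rng.normal(0.0, 1.0, n_subjects)     # e.g. an uncorrected tissue effect

    uncorrected = true_conc + nuisance
    corrected = uncorrected - nuisance              # idealized perfect correction

    reduction = 1.0 - corrected.var(ddof=1) / uncorrected.var(ddof=1)
    print(f"inter-subject variance reduced by {100 * reduction:.1f}%")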
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C; Murphy, Brian D; Mueller, Don
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
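The CADIS prescription referenced here has a well-known closed form: given an adjoint (importance) flux phi† and analog source q, the estimated response is R = Σ q·phi†, the biased source is q·phi†/R, and weight-window centers are R/phi†, so that source particles are born inside their windows. A one-dimensional numeric sketch follows; the toy source and adjoint values are assumptions, not the PWR model.

    import numpy as np

    # Toy 1-D mesh: analog source strength and deterministic adjoint flux per cell.
    q = np.array([1.0, 0.5, 0.2, 0.0, 0.0])            # physical source
    phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])  # importance toward the detector

    R = np.sum(q * phi_adj)        # adjoint-based estimate of the detector response
    q_biased = q * phi_adj / R     # biased source pdf (sums to 1 by construction)
    w_center = R / phi_adj         # weight-window centers; birth weight q/q_biased matches

    print("biased source pdf    :", q_biased)
    print("weight-window centers:", w_center)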
ADVANTG 3.0.1: AutomateD VAriaNce reducTion Generator
2015-08-17
Version 00 ADVANTG is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 V1.60 (CCC-810, not included in this distribution) based on approximate 3-D multigroup discrete ordinates adjoint transport solutions generated by Denovo (included in this distribution). The variance reduction parameters generated by ADVANTG consist of space and energy-dependent weight-window bounds and biased source distributions, which are output in formats that can be directly used with unmodified versions of MCNP5. ADVANTG has been applied to neutron, photon, and coupled neutron-photon simulations of real-world radiation detection and shielding scenarios. ADVANTG is compatible with all MCNP5 geometry features and can be used to accelerate cell tallies (F4, F6, F8), surface tallies (F1 and F2), point-detector tallies (F5), and Cartesian mesh tallies (FMESH).
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-09-15
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
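Ingredient (3) can be shown in miniature: let the cheap reduced-basis surrogate absorb most of the samples and reserve the expensive high-fidelity solver for a small correction term, whose variance is low because the two models are strongly correlated. The scalar stand-ins below are assumptions for illustration; the real method couples HDG and reduced-basis PDE solvers.

    import numpy as np

    rng = np.random.default_rng(7)

    def high_fidelity(z):        # stand-in for the expensive HDG solve
        return np.sin(z) + 0.05 * z ** 2

    def reduced_basis(z):        # stand-in for the cheap reduced-basis surrogate
        return np.sin(z)         # close to, and strongly correlated with, the above

    # Many cheap samples estimate the surrogate mean; few expensive samples
    # estimate the (low-variance) correction E[high_fidelity - reduced_basis].
    z_cheap = rng.normal(size=100_000)
    z_exp = rng.normal(size=500)
    estimate = (reduced_basis(z_cheap).mean()
                + (high_fidelity(z_exp) - reduced_basis(z_exp)).mean())
    print("two-level estimate:", estimate)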
Importance sampling variance reduction for the Fokker-Planck rarefied gas particle method
NASA Astrophysics Data System (ADS)
Collyer, B. S.; Connaughton, C.; Lockerby, D. A.
2016-11-01
The Fokker-Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
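The importance-sampling trick for the low-speed regime can be caricatured in one dimension: sample molecular velocities from the zero-drift equilibrium, attach likelihood-ratio weights, and estimate the deviation from equilibrium rather than the raw mean. The Gaussian model below is a scalar stand-in chosen for illustration, not the paper's Fokker-Planck solver.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 50_000
    u0 = 0.01                     # small mean flow speed (low-Mach caricature)

    # Sample velocities from the zero-drift equilibrium g = N(0, 1) instead of
    # the true distribution f = N(u0, 1); w is the likelihood ratio f/g.
    v = rng.normal(0.0, 1.0, n)
    w = np.exp(u0 * v - 0.5 * u0 ** 2)

    # Estimate the deviation from equilibrium: E_f[v] = E_g[(w - 1) v] because
    # E_g[v] = 0. Since (w - 1) = O(u0), the statistical error scales with the
    # signal, so the relative error stays bounded as u0 -> 0.
    est = np.mean((w - 1.0) * v)
    print(f"true mean velocity {u0}, weighted estimate {est:.5f}")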
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-15
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
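The correlated-process construction can be sketched generically: drive the main process and an auxiliary process with a known mean using the same random numbers, then subtract the auxiliary's sampling error. The scalar Ornstein-Uhlenbeck pair below is an illustrative stand-in, not the Fokker-Planck gas model:

    import numpy as np

    rng = np.random.default_rng(9)
    n_paths, n_steps, dt = 20_000, 200, 0.01

    # Main process: OU relaxing toward the quantity of interest (mean 0.3).
    # Auxiliary process: OU with known mean 0, driven by the SAME noise.
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.normal(size=n_paths)
        x += -1.0 * (x - 0.3) * dt + 0.5 * dW    # main process
        y += -1.2 * y * dt + 0.5 * dW            # correlated auxiliary, E[y] = 0

    est = x - y                                   # subtract the auxiliary's noise
    print("plain     :", x.mean(), "+/-", x.std(ddof=1) / np.sqrt(n_paths))
    print("correlated:", est.mean(), "+/-", est.std(ddof=1) / np.sqrt(n_paths))

Because the two processes share noise increments, the difference estimator keeps the same mean but carries a far smaller variance.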
MCNPX--PoliMi Variance Reduction Techniques for Simulating Neutron Scintillation Detector Response
NASA Astrophysics Data System (ADS)
Prasad, Shikha
Scintillation detectors have emerged as a viable He-3 replacement technology in the field of nuclear nonproliferation and safeguards. The scintillation light produced in the detectors is dependent on the energy deposited and the nucleus with which the interaction occurs. For neutrons interacting with hydrogen in organic liquid scintillation detectors, the energy-to-light conversion process is nonlinear. MCNPX-PoliMi is a Monte Carlo code that has been used for simulating this detailed scintillation physics; however, until now, simulations have only been done in analog mode. Analog Monte Carlo simulations can take long times to run, especially in the presence of shielding and large source-detector distances, as in the case of typical nonproliferation problems. In this thesis, two nonanalog approaches to speed up MCNPX-PoliMi simulations of neutron scintillation detector response have been studied. In the first approach, a response matrix method (RMM) is used to efficiently calculate neutron pulse height distributions (PHDs). This method combines the neutron current incident on the detector face with an MCNPX-PoliMi-calculated response matrix to generate PHDs. The PHD calculations and their associated uncertainty are compared for a polyethylene-shielded and lead-shielded Cf-252 source for three different techniques: fully analog MCNPX-PoliMi, the RMM, and the RMM with source biasing. The RMM with source biasing reduces computation time or increases the figure-of-merit on an average by a factor of 600 for polyethylene and 300 for lead shielding (when compared to the fully analog calculation). The simulated neutron PHDs show good agreement with the laboratory measurements, thereby validating the RMM. In the second approach, MCNPX-PoliMi simulations are performed with the aid of variance reduction techniques. This is done by separating the analog and nonanalog components of the simulations. Inside the detector region, where scintillation light is produced, no variance
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
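The two ingredients of splitting-roulette have canonical weight rules: splitting an important particle into k copies that each carry 1/k of the weight, and playing Russian roulette on an unimportant particle, with survivors up-weighted by 1/p. A minimal generic sketch follows (not PENELOPE's actual routines, and omitting selective splitting's angular criterion):

    import random

    def split(particle, k):
        # k identical copies, each carrying 1/k of the weight (mean preserved).
        w = particle["weight"] / k
        return [dict(particle, weight=w) for _ in range(k)]

    def russian_roulette(particle, p_survive):
        # Kill with probability 1 - p_survive; survivors are up-weighted so the
        # expected carried weight is unchanged (unbiased).
        if random.random() < p_survive:
            particle["weight"] /= p_survive
            return particle
        return None

    photon = {"energy_MeV": 6.0, "weight": 1.0}
    print(split(photon, 4))
    print(russian_roulette({"energy_MeV": 0.1, "weight": 0.25}, p_survive=0.5))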
Hybrid mesh generation using advancing reduction technique
Technology Transfer Automated Retrieval System (TEKTRAN)
This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...
Nordström, Jan; Wahlsten, Markus
2015-02-01
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates for the variance of the solution. This means that, with the same knowledge of the data, we can get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
NASA Astrophysics Data System (ADS)
Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa
2014-07-01
We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
Bias and variance reduction in estimating the proportion of true-null hypotheses
Cheng, Yebin; Gao, Dexiang; Tong, Tiejun
2015-01-01
When testing a large number of hypotheses, estimating the proportion of true nulls, denoted by π0, becomes increasingly important. This quantity has many applications in practice. For instance, a reliable estimate of π0 can eliminate the conservative bias of the Benjamini–Hochberg procedure on controlling the false discovery rate. It is known that most methods in the literature for estimating π0 are conservative. Recently, some attempts have been made to reduce such estimation bias. Nevertheless, they are either over bias-corrected or suffer from an unacceptably large estimation variance. In this paper, we propose a new method for estimating π0 that aims to reduce the bias and variance of the estimation simultaneously. To achieve this, we first utilize the probability density functions of false-null …
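A classical reference point for this quantity is Storey's estimator, pi0_hat(lambda) = #{p_i > lambda} / ((1 - lambda) m), which is conservative in exactly the sense the abstract describes. The sketch below implements that standard baseline on simulated p-values; it is not the bias-and-variance-reduced estimator the paper proposes.

    import numpy as np

    rng = np.random.default_rng(10)
    m, pi0_true = 10_000, 0.8

    # Simulated p-values: true nulls are uniform; false nulls pile up near zero.
    n_null = int(pi0_true * m)
    p = np.concatenate([rng.random(n_null), rng.beta(0.1, 1.0, m - n_null)])

    lam = 0.5
    pi0_hat = np.mean(p > lam) / (1.0 - lam)   # Storey's estimator
    print(f"true pi0 = {pi0_true}, estimated = {pi0_hat:.3f}")  # slightly conservative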
Golzari, Fahimeh; Jalili, Saeed
2015-07-21
In the protein function prediction (PFP) problem, the goal is to predict the functions of numerous well-sequenced known proteins whose function is still not known precisely. PFP is one of the special and complex problems in the machine learning domain, in which a protein (regarded as an instance) may have more than one function simultaneously. Furthermore, the functions (regarded as classes) are dependent and are organized in a hierarchical structure in the form of a tree or directed acyclic graph. One of the common learning methods proposed for solving this problem is decision trees, in which, by partitioning data into sets with sharp boundaries, small changes in the attribute values of a new instance may cause an incorrect change in the predicted label of the instance and, finally, misclassification. In this paper, a Variance Reduction based Binary Fuzzy Decision Tree (VR-BFDT) algorithm is proposed to predict the functions of proteins. This algorithm simply fuzzifies the decision boundaries instead of converting the numeric attributes into fuzzy linguistic terms. It has the ability to assign multiple functions to each protein simultaneously and preserves the hierarchy consistency between functional classes. It uses label variance reduction as the splitting criterion to select the best "attribute-value" at each node of the decision tree. The experimental results show that the overall performance of the proposed algorithm is promising.
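The splitting criterion named here, label variance reduction, is the regression-tree analogue of information gain: score a candidate split by how much it lowers the weighted within-node label variance. The sketch below scores one candidate "attribute-value" generically; the fuzzy-boundary machinery of VR-BFDT is not reproduced, and the data are synthetic.

    import numpy as np

    def variance_reduction(y, x, threshold):
        """Reduction in label variance from splitting on x <= threshold."""
        left, right = y[x <= threshold], y[x > threshold]
        if len(left) == 0 or len(right) == 0:
            return 0.0
        w_l, w_r = len(left) / len(y), len(right) / len(y)
        return y.var() - (w_l * left.var() + w_r * right.var())

    rng = np.random.default_rng(11)
    x = rng.random(200)
    y = (x > 0.6).astype(float) + 0.1 * rng.normal(size=200)  # labels driven by x

    for t in (0.3, 0.6, 0.9):
        print(f"split at {t}: variance reduction = {variance_reduction(y, x, t):.4f}")

The split at the true changepoint (0.6) scores highest, which is exactly the behaviour the criterion is designed to reward.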
Advancing Greenhouse Gas Reductions through Affordable Housing
James City County, Virginia, is an EPA Climate Showcase Community. EPA’s Climate Showcase Communities Program helps local governments and tribal nations pilot innovative, cost-effective and replicable community-based greenhouse gas reduction projects.
Ramos, M; Ferrer, S; Verdu, G
2005-01-01
Mammography is a non-invasive technique used for the detection of breast lesions. The use of this technique in a breast screening program requires continuous quality control testing of mammography units to ensure a minimum absorbed glandular dose without degrading image quality. Digital mammography has been progressively introduced in screening centers following the recent evolution of photostimulable phosphor detectors. The aim of this work is the validation of a methodology for reconstructing digital images of a polymethyl-methacrylate (PMMA) phantom (P01 model) using pure Monte Carlo techniques. A reference image was acquired for this phantom under automatic exposure control (AEC) mode (28 kV and 14 mAs). Several variance reduction techniques (VRT) were applied to improve the efficiency of the simulations, defined as the number of particles reaching the imaging system per starting particle. All images were used and stored in DICOM format. The results show that the signal-to-noise ratio (SNR) of the reconstructed images increased with the use of the VRT, showing similar values for the different tallies employed. In conclusion, these images could be used during quality control testing to reveal any deviation of the exposure parameters from the desired reference level.
NASA Astrophysics Data System (ADS)
Ezzati, A. O.; Sohrabpour, M.
2013-02-01
In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX 2.4 source code. First, the efficiency of these methods was compared using two tallying methods. The APRS is more efficient than the APR method for track-length estimator tallies; however, for the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were also obtained. These contours reveal that, as the photon energy increases, the contour depth and the surrounding areas increase further. The relative efficiency contours indicate that the variance reduction factor is position and energy dependent. The relative efficiency contours for out-of-field voxels show that the latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland.
Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits
NASA Technical Reports Server (NTRS)
Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.
2005-01-01
This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft which included a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets, we show that our proposed method leads to improved portfolio allocation. PMID:23844016
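The factor-model covariance that the DVA algorithm adjusts has the standard form Sigma = B F B^T + D, with loadings B, factor covariance F, and diagonal idiosyncratic variances D. The sketch below assembles it from simulated returns with known loadings (an assumption; in practice B is estimated, e.g. by factor analysis) and contrasts its conditioning with the raw sample covariance. The DVA adjustment itself is the paper's contribution and is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(12)
    n_assets, n_factors, n_days = 100, 5, 250

    B = rng.normal(size=(n_assets, n_factors))       # factor loadings (known here)
    f = rng.normal(size=(n_days, n_factors))         # factor returns
    eps = 0.5 * rng.normal(size=(n_days, n_assets))  # idiosyncratic returns
    returns = f @ B.T + eps

    F = np.cov(f, rowvar=False)                      # factor covariance
    D = np.diag((returns - f @ B.T).var(axis=0))     # idiosyncratic variances
    sigma = B @ F @ B.T + D                          # factor-model covariance

    print("condition number, sample cov :",
          f"{np.linalg.cond(np.cov(returns, rowvar=False)):.1e}")
    print("condition number, factor cov :", f"{np.linalg.cond(sigma):.1e}")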
Oxidation-Reduction Resistance of Advanced Copper Alloys
NASA Technical Reports Server (NTRS)
Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.
2003-01-01
Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
NASA Noise Reduction Program for Advanced Subsonic Transports
NASA Technical Reports Server (NTRS)
Stephens, David G.; Cazier, F. W., Jr.
1995-01-01
Aircraft noise is an important byproduct of the world's air transportation system. Because of growing public interest and sensitivity to noise, noise reduction technology is becoming increasingly important to the unconstrained growth and utilization of the air transportation system. Unless noise technology keeps pace with public demands, noise restrictions at the international, national and/or local levels may unduly constrain the growth and capacity of the system to serve the public. In recognition of the importance of noise technology to the future of air transportation as well as the viability and competitiveness of the aircraft that operate within the system, NASA, the FAA and the industry have developed noise reduction technology programs having application to virtually all classes of subsonic and supersonic aircraft envisioned to operate far into the 21st century. The purpose of this paper is to describe the scope and focus of the Advanced Subsonic Technology Noise Reduction program with emphasis on the advanced technologies that form the foundation of the program.
Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics
NASA Technical Reports Server (NTRS)
Bushnell, Dennis M.
2000-01-01
This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.
Recent advances in the kinetics of oxygen reduction
Adzic, R.
1996-07-01
Oxygen reduction is considered an important electrocatalytic reaction; the most notable need remains improvement of the catalytic activity of existing metal electrocatalysts and development of new ones. A review is given of new advances in the understanding of reaction kinetics and improvements of the electrocatalytic properties of some surfaces, with a focus on recent studies of the relationship of surface properties to activity and reaction kinetics. The urgent need is to improve the catalytic activity of Pt and to synthesize new, possibly non-noble metal catalysts. New experimental techniques for obtaining a new level of information include various in situ spectroscopies and scanning probes, some involving synchrotron radiation. 138 refs, 18 figs, 2 tabs.
Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping
2016-01-01
The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.
Active Vibration Reduction of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Metscher, Jonathan F.; Schifer, Nicholas A.
2016-01-01
Stirling Radioisotope Power Systems (RPS) are being developed as an option to provide power on future space science missions where robotic spacecraft will orbit, flyby, land or rove. A Stirling Radioisotope Generator (SRG) could offer space missions a more efficient power system that uses one fourth of the nuclear fuel and decreases the thermal footprint compared to the current state of the art. The Stirling Cycle Technology Development (SCTD) Project is funded by the RPS Program to develop Stirling-based subsystems, including convertor and controller maturation efforts that have resulted in high-fidelity hardware like the Advanced Stirling Radioisotope Generator (ASRG), Advanced Stirling Convertor (ASC), and ASC Controller Unit (ACU). The SCTD Project also performs research to develop less mature technologies with a wide variety of objectives, including increasing temperature capability to enable new environments, improving system reliability or fault tolerance, reducing mass or size, and developing advanced concepts that are mission enabling. Active vibration reduction systems (AVRS), or "balancers", have historically been developed and characterized to provide fault tolerance for generator designs that incorporate dual-opposed Stirling convertors or enable single-convertor, or small RPS, missions. Balancers reduce the dynamic disturbance forces created by the power piston and displacer internal moving components of a single operating convertor to meet spacecraft requirements for induced disturbance force. To improve fault tolerance for dual-opposed configurations and enable single-convertor configurations, a breadboard AVRS was implemented on the Advanced Stirling Convertor (ASC). The AVRS included a linear motor, a motor mount, and a closed-loop controller able to balance out the transmitted peak dynamic disturbance using acceleration feedback. Test objectives included quantifying power and mass penalty and reduction in transmitted force over a range of ASC
Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector
Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.
2014-09-01
Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.
Biologic lung volume reduction therapy for advanced homogeneous emphysema.
Refaely, Y; Dransfield, M; Kramer, M R; Gotfried, M; Leeds, W; McLennan, G; Tewari, S; Krasna, M; Criner, G J
2010-07-01
This report summarises phase 2 trial results of biologic lung volume reduction (BioLVR) for treatment of advanced homogeneous emphysema. BioLVR therapy was administered bronchoscopically to 25 patients with homogeneous emphysema in an open-label study. Eight patients received low-dose (LD) treatment with 10 mL per site at eight subsegments; 17 received high-dose (HD) treatment with 20 mL per site at eight subsegments. Safety was assessed in terms of medical complications during 6-month follow-up. Efficacy was assessed in terms of change from baseline in gas trapping, spirometry, diffusing capacity, exercise capacity, dyspnoea and health-related quality of life. There were no deaths or serious medical complications during the study. A statistically significant reduction in gas trapping was observed at 3-month follow-up among HD patients, but not LD patients. At 6 months, changes from baseline in forced expiratory volume in 1 s (-8.0±13.93% versus +13.8±20.26%), forced vital capacity (-3.9±9.41% versus +9.0±13.01%), residual volume/total lung capacity ratio (-1.4±13.82% versus -5.4±12.14%), dyspnoea scores (-0.4±1.27 versus -0.8±0.73 units) and St George's Respiratory Questionnaire total domain scores (-4.9±8.3 versus -12.2±12.38 units) were better with HD than with LD therapy. BioLVR therapy with 20 mL per site at eight subsegmental sites may be a safe and effective therapy in patients with advanced homogeneous emphysema.
Analytic investigation of advancing blade drag reduction by tip modifications
NASA Technical Reports Server (NTRS)
Tauber, M. E.
1978-01-01
Analytic techniques were applied to study the effect on the performance of the nonlifting advancing blade when the outboard 5% of the blade is modified to reduce drag. The tip modifications studied consisted of reducing airfoil thickness, sweepback, and planform taper. The reductions in instantaneous drag and torque were calculated for tip speed ratios from about 0.19 to 0.30, corresponding to advancing blade tip Mach numbers of 0.855 to 0.936, respectively. Approximations required in the analysis introduce uncertainties into the computed absolute values of drag and torque; however, the differences in the quantities should be a fairly reliable measure of the effect of changing tip geometry. For example, at the highest tip speed, instantaneous drag, and torque were reduced by 20% and 24%, respectively, for tip sweep of 40 deg on a blade using an NACA 0010 airfoil and by comparable amounts for 30-deg sweep on a blade having an NACA 0012 airfoil section. The present method should prove to be a useful, inexpensive technique for identifying promising configurations for additional study and testing.
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied the epidemiology of diagnostic error provide some estimates of diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Low cost biological lung volume reduction therapy for advanced emphysema
Bakeer, Mostafa; Abdelgawad, Taha Taha; El-Metwaly, Raed; El-Morsi, Ahmed; El-Badrawy, Mohammad Khairy; El-Sharawy, Solafa
2016-01-01
Background Bronchoscopic lung volume reduction (BLVR), using biological agents, is one of the new alternatives to lung volume reduction surgery. Objectives To evaluate the efficacy and safety of biological BLVR using low-cost agents including autologous blood and fibrin glue. Methods Enrolled patients were divided into two groups: group A (seven patients), in which autologous blood was used, and group B (eight patients), in which fibrin glue was used. The agents were injected through a triple-lumen balloon catheter via fiberoptic bronchoscope. Changes in high-resolution computerized tomography (HRCT) volumetry, pulmonary function tests, symptoms, and exercise capacity were evaluated at 12 weeks postprocedure, as well as complications. Results In group A, at 12 weeks postprocedure, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: <0.001 and 0.038, respectively). In group B, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: 0.005 and 0.004, respectively). All patients tolerated the procedure, with no mortality. Conclusion BLVR using autologous blood and locally prepared fibrin glue is a promising therapy for advanced emphysema in terms of efficacy, safety, and cost-effectiveness. PMID:27536091
Advances in volcano monitoring and risk reduction in Latin America
NASA Astrophysics Data System (ADS)
McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Volcano Disaster Assistance Program; Latin American Volcano Observatories
2014-12-01
We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data
Advanced Reduction Processes: A New Class of Treatment Processes
Vellanki, Bhanu Prakash; Batchelor, Bill; Abdel-Wahab, Ahmed
2013-01-01
A new class of treatment processes called advanced reduction processes (ARPs) is proposed. ARPs combine activation methods and reducing agents to form highly reactive reducing radicals that degrade oxidized contaminants. Batch screening experiments were conducted to identify effective ARPs by applying several combinations of activation methods (ultraviolet light, ultrasound, electron beam, and microwaves) and reducing agents (dithionite, sulfite, ferrous iron, and sulfide) to the degradation of four target contaminants (perchlorate, nitrate, perfluorooctanoic acid, and 2,4-dichlorophenol) at three pH levels (2.4, 7.0, and 11.2). These experiments identified the combination of sulfite activated by ultraviolet light produced by a low-pressure mercury vapor lamp (UV-L) as an effective ARP. More detailed kinetic experiments were conducted with nitrate and perchlorate as target compounds, and nitrate was found to degrade more rapidly than perchlorate. Effectiveness of the UV-L/sulfite treatment process improved with increasing pH for both perchlorate and nitrate. We present the theory behind ARPs, identify potential ARPs, demonstrate their effectiveness against a wide range of contaminants, and provide basic experimental evidence in support of the fundamental hypothesis for ARPs, namely, that activation methods can be applied to reductants to form reducing radicals that degrade oxidized contaminants. This article provides an introduction to ARPs along with sufficient data to identify potentially effective ARPs and the target compounds these ARPs will be most effective in destroying. Further research will provide a detailed analysis of degradation kinetics and the mechanisms of contaminant destruction in an ARP. PMID:23840160
Mindfulness-Based Stress Reduction in Advanced Nursing Practice
Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula
2015-01-01
The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary. PMID:25673578
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Craig, Kellie D.
2011-01-01
The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) Engineering Demonstration and/or Risk Reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining Engineering Demonstration and/or Risk Reduction.
NASA Astrophysics Data System (ADS)
Wenner, Michael T.
Obtaining the solution to the linear Boltzmann equation is often a daunting task. The time-independent form is an equation of six independent variables which cannot be solved analytically in all but some special problems. Instead, numerical approaches have been devised. This work focuses on improving Monte Carlo methods for its solution in eigenvalue form. First, a statistical method of stationarity detection, the KPSS test, was adapted as a Monte Carlo eigenvalue source convergence test. The KPSS test analyzes the source center-of-mass series, which was chosen because it should be indicative of overall source behavior and is physically easy to understand. A source center-of-mass plot alone serves as a good visual source convergence diagnostic. The KPSS test and three different information-theoretic diagnostics were implemented into the well-known KENO V.a code inside the SCALE (version 5) code package from Oak Ridge National Laboratory and compared through analysis of a simple problem and several difficult source convergence benchmarks. Results showed that the KPSS test can add to the overall confidence by identifying more problematic simulations than without its usage. In addition, the source center-of-mass information visually aids in the understanding of the problem physics. The second major focus of this dissertation concerned variance reduction methodologies for Monte Carlo eigenvalue problems. The CADIS methodology, based on importance sampling, was adapted to eigenvalue problems. It was shown that the straight adaptation of importance sampling can provide a significant variance reduction in the determination of keff (up to 30% in the cases studied). A modified version of this methodology was developed which utilizes independent deterministic importance simulations. In this new methodology, each particle is simulated multiple times, once to every other discretized source region, utilizing the importance for that region only. Since each particle
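For readers unfamiliar with the diagnostic, the following minimal Python sketch shows how a KPSS stationarity test could be applied to a fission-source center-of-mass series to flag unconverged generations. It illustrates the idea described in the abstract only; it is not the dissertation's implementation, and the toy series, window length, threshold, and the helper name first_stationary_generation are all assumptions.

```python
# Sketch: KPSS-based source convergence check on a center-of-mass series.
import numpy as np
from statsmodels.tsa.stattools import kpss  # KPSS null hypothesis = stationarity

rng = np.random.default_rng(0)
# Toy series: the source center of mass drifts for 100 generations,
# then fluctuates about a converged mean (assumed, illustrative data).
com_series = np.concatenate([
    np.linspace(5.0, 0.0, 100) + rng.normal(0, 0.1, 100),  # unconverged drift
    rng.normal(0.0, 0.1, 400),                             # stationary phase
])

def first_stationary_generation(series, window=100, alpha=0.05, step=10):
    """Return the first generation index whose trailing window looks
    stationary; large KPSS p-values mean we fail to reject stationarity."""
    for start in range(0, len(series) - window, step):
        _, p_value, _, _ = kpss(series[start:start + window],
                                regression="c", nlags="auto")
        if p_value > alpha:   # window consistent with a converged source
            return start
    return None

print("Discard generations before:", first_stationary_generation(com_series))
```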
Advanced supersonic propulsion study. [with emphasis on noise level reduction
NASA Technical Reports Server (NTRS)
Sabatella, J. A. (Editor)
1974-01-01
A study was conducted to determine the promising propulsion systems for advanced supersonic transport application, and to identify the critical propulsion technology requirements. It is shown that noise constraints have a major effect on the selection of the various engine types and cycle parameters. Several promising advanced propulsion systems were identified which show the potential of achieving lower levels of sideline jet noise than the first-generation supersonic transport systems. The non-afterburning turbojet engine, utilizing a very high level of jet suppression, shows the potential to achieve the FAR 36 noise level. The duct-heating turbofan with a low level of jet suppression is the most attractive engine for noise levels from FAR 36 to FAR 36 minus 5 EPNdB, and some series/parallel variable cycle engines show the potential of achieving noise levels down to FAR 36 minus 10 EPNdB with moderate additional penalty. The study also shows that an advanced supersonic commercial transport would benefit appreciably from advanced propulsion technology. The critical propulsion technology needed for a viable supersonic propulsion system, and the required specific propulsion technology programs, are outlined.
Advances in reduction techniques for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
Knighton, Robert W.; Gregori, Giovanni; Budenz, Donald L.
2012-01-01
Purpose. To examine the similarities and differences in the shape of the macular ganglion cell plus inner plexiform layers (GCL+IPL) in a healthy human population, and seek methods to reduce population variance and improve discriminating power. Methods. Macular images of the right eyes of 23 healthy subjects were obtained with spectral domain optical coherence tomography. The thickness of GCL+IPL was determined by manual segmentation, areas with blood vessels were removed, and the resulting maps were fit by smooth surfaces in polar coordinates centered on the fovea. Results. The mean GCL+IPL thickness formed a horizontal elliptical annulus. The variance increased toward the center and was highest near the foveal edge. Individual maps differed in foveal size and overall GCL+IPL thickness. Foveal size correction by radially shifting individual maps to the same foveal size as the mean map reduced perifoveal variance. Thickness alignment by shifting individual maps axially, then radially, to match the mean map reduced overall variance. These transformations had very little effect on the population mean. Conclusions. Simple transformations of individual GCL+IPL thickness maps to a canonical form can considerably reduce the population variance in a sample of normal eyes, likely improving the ability to discriminate abnormal maps. The transformations considered here preserve the local geometry of the thickness maps. When used on a patient's map, they can produce a deviation map that provides a meaningful measurement of the size of local thickness deviations and allows estimation of the number of ganglion cells lost in a glaucomatous defect. PMID:22562512
Recent advancements in mechanical reduction methods: particulate systems.
Leleux, Jardin; Williams, Robert O
2014-03-01
The screening of new active pharmaceutical ingredients (APIs) has become more streamlined, and as a result the number of new drugs in the pipeline is steadily increasing. However, a major limiting factor in new API approval and market introduction is the low solubility associated with a large percentage of these new drugs. While many modification strategies have been studied to improve solubility, such as salt formation and the addition of cosolvents, most provide only marginal success and have severe disadvantages. One of the most successful methods to date is the mechanical reduction of drug particle size, which inherently increases the surface area of the particles and, as described by the Noyes-Whitney equation, the dissolution rate. Drug micronization has been the gold standard to achieve these improvements; however, the extremely low solubility of some new chemical entities is not significantly affected by size reduction in this range. A reduction in size to the nanometric scale is necessary. Bottom-up and top-down techniques are utilized to produce drug crystals in this size range; however, as discussed in this review, top-down approaches have provided greater enhancements in drug usability on the industrial scale. The six FDA-approved products, all of which exploit top-down approaches, confirm this. In this review, the advantages and disadvantages of both approaches are discussed, in addition to specific top-down techniques and the improvements they contribute to the pharmaceutical field.
Noise exposure reduction of advanced high-lift systems
NASA Technical Reports Server (NTRS)
Haffner, Stephen W.
1995-01-01
The purpose of NASA Contract NAS1-20090 Task 3 was to investigate the potential for noise reduction that would result from improving the high-lift performance of conventional subsonic transports. The study showed that an increase in lift-to-drag ratio of 15 percent would reduce certification noise levels by about 2 EPNdB on approach, 1.5 EPNdB on cutback, and zero EPNdB on sideline. In most cases, noise contour areas would be reduced by 10 to 20 percent.
Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells
NASA Astrophysics Data System (ADS)
Ali, Mohamed Mahmoud; Kvande, Halvor
2017-02-01
There are two main preheating methods used nowadays for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the performance of the preheating, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some work carried out in the area of mathematical modeling is also discussed. It is concluded that more studies of preheated cells under real conditions, based on actual measurements, are needed. The expected development of electrical resistance preheating of aluminum reduction cells is also summarized.
An advanced carbon reactor subsystem for carbon dioxide reduction
NASA Technical Reports Server (NTRS)
Noyes, Gary P.; Cusick, Robert J.
1986-01-01
An evaluation is presented of the development status of an advanced carbon-reactor subsystem (ACRS) for the production of water and dense, solid carbon from CO2 and hydrogen, as required in physicochemical air revitalization systems for long-duration manned space missions. The ACRS consists of a Sabatier Methanation Reactor (SMR) that reduces CO2 with hydrogen to form methane and water, a gas-liquid separator to remove product water from the methane, and a Carbon Formation Reactor (CFR) to pyrolyze methane to carbon and hydrogen; the hydrogen is recycled to the SMR, while the product carbon is periodically removed from the CFR. A preprototype ACRS under development for the NASA Space Station is described.
Development of an advanced Sabatier CO2 reduction subsystem
NASA Technical Reports Server (NTRS)
Kleiner, G. N.; Cusick, R. J.
1981-01-01
A preprototype Sabatier CO2 reduction subsystem was successfully designed, fabricated, and tested. The lightweight, quick-starting (less than 5 minutes) reactor utilizes a highly active and physically durable methanation catalyst composed of ruthenium on alumina. The use of this improved catalyst permits a simple, passively controlled reactor design with an average lean-component H2/CO2 conversion efficiency of over 99% over a range of H2/CO2 molar ratios of 1.8 to 5, while operating with process flows equivalent to a crew size of up to five persons. The subsystem requires no heater operation after start-up, even during simulated 55-minute lightside/39-minute darkside orbital operation.
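As context for the "lean-component conversion efficiency" figure quoted above, here is a minimal sketch of the Sabatier stoichiometry (CO2 + 4 H2 -> CH4 + 2 H2O). The feed quantities are illustrative, chosen only to span the molar-ratio range stated in the abstract; the function name and numbers are assumptions, not values from the report.

```python
# Sketch: whichever reactant is substoichiometric (the "lean component")
# limits the Sabatier reaction CO2 + 4 H2 -> CH4 + 2 H2O.
def sabatier_products(co2_mol, h2_mol, lean_conversion=0.99):
    """Return (ch4, h2o, co2_left, h2_left) in moles for a given feed."""
    extent_if_co2_lean = co2_mol        # reaction extent if CO2 limits
    extent_if_h2_lean = h2_mol / 4.0    # reaction extent if H2 limits
    extent = lean_conversion * min(extent_if_co2_lean, extent_if_h2_lean)
    return (extent,                     # CH4 produced
            2.0 * extent,               # H2O produced
            co2_mol - extent,           # CO2 remaining
            h2_mol - 4.0 * extent)      # H2 remaining

# H2/CO2 molar ratio of 1.8: H2 is the lean component (ratio < 4).
print(sabatier_products(co2_mol=1.0, h2_mol=1.8))
# H2/CO2 molar ratio of 5: CO2 is the lean component (ratio > 4).
print(sabatier_products(co2_mol=1.0, h2_mol=5.0))
```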
NASA Technical Reports Server (NTRS)
Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini
2013-01-01
All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web
Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa
2005-01-01
Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 °C to 1400 °C, carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the NH3 ability at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only a common dose of ammonia, as in conventional SNCR technology. Mechanism study shows that the oxidation of CO promotes the decomposition of H2O, which enriches the radical pools, igniting the whole set of reactions at lower temperatures. PMID:15682503
Advancements in Steel for Weight Reduction of P900 Armor Plate
2008-12-01
Howell, R. A.; Montgomery, J. S. (Survivability Materials Branch, Army Research Laboratory, Aberdeen Proving Ground, MD); Van Aken, D. C. (Missouri University of Science and Technology, Rolla, MO)
Steels were investigated as alternatives to MIL-PRF-32269 steel alloys for application in P900 perforated armor currently used for Army ground combat ... Ballistic tests
NASA Technical Reports Server (NTRS)
Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian
2000-01-01
This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings from exploring the correlation between the underlying models of the Advanced Risk Reduction Tool (ARRT) and how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.
NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements.
NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements. Demonstrations and
Tremor reduction by subthalamic nucleus stimulation and medication in advanced Parkinson's disease.
Blahak, Christian; Wöhrle, Johannes C; Capelle, Hans-Holger; Bäzner, Hansjörg; Grips, Eva; Weigel, Ralf; Hennerici, Michael G; Krauss, Joachim K
2007-02-01
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) has proved to be effective for tremor in Parkinson's disease (PD). Most recent studies used only clinical data to analyse tremor reduction. The objective of our study was to quantify tremor reduction by STN DBS and antiparkinsonian medication in elderly PD patients using an objective measuring system. Amplitude and frequency of resting tremor and of re-emergent resting tremor during postural tasks were analysed using an ultrasound-based measuring system and surface electromyography. In a prospective study design, nine patients with advanced PD were examined preoperatively off and on medication, and twice postoperatively during four treatment conditions: off treatment, on STN DBS, on medication, and on STN DBS plus medication. While both STN DBS and medication reduced tremor amplitude, STN DBS alone and the combination of medication and STN DBS were significantly superior to pre- and postoperative medication. STN DBS, but not medication, increased tremor frequency, and off-treatment tremor frequency was significantly reduced postoperatively compared to baseline. These findings demonstrate that STN DBS is highly effective in elderly patients with advanced PD and moderate preoperative tremor reduction by medication. Thus, given its additional impact on the other parkinsonian symptoms, STN DBS can replace thalamic stimulation in this cohort of patients. Nevertheless, medication was still effective postoperatively and may act synergistically. The significantly superior efficacy of STN DBS on tremor amplitude, and its impact on tremor frequency in contrast to medication, might be explained by the influence of STN DBS on additional neural circuits independent of dopaminergic neurotransmission.
Potential reduction of en route noise from an advanced turboprop aircraft
NASA Technical Reports Server (NTRS)
Dittmar, James H.
1990-01-01
When the en route noise of a representative aircraft powered by an eight-blade SR-7 propeller was previously calculated, the noise level was cited as a possible concern associated with the acceptance of advanced turboprop aircraft. Some potential methods for reducing the en route noise were then investigated and are reported. Source noise reductions from increasing the blade number and from operating at higher rotative speed to reach a local minimum noise point were investigated. Greater atmospheric attenuations for higher blade passing frequencies were also indicated. Potential en route noise reductions from these methods were calculated as 9.5 dB (6.5 dB(A)) for a 10-blade redesigned propeller and 15.5 dB (11 dB(A)) for a 12-blade redesigned propeller.
Impacts of natural organic matter on perchlorate removal by an advanced reduction process.
Duan, Yuhang; Batchelor, Bill
2014-01-01
Perchlorate can be destroyed by Advanced Reduction Processes (ARPs) that combine chemical reductants (e.g., sulfite) with activating methods (e.g., UV light) in order to produce highly reactive reducing free radicals that are capable of rapid and effective perchlorate reduction. However, natural organic matter (NOM) exists widely in the environment and has the potential to influence perchlorate reduction by ARPs that use UV light as the activating method. Batch experiments were conducted to obtain data on the impacts of NOM and wavelength of light on destruction of perchlorate by the ARPs that use sulfite activated by UV light produced by low-pressure mercury lamps (UV-L) or by KrCl excimer lamps (UV-KrCl). The results indicate that NOM strongly inhibits perchlorate removal by both ARPs, because it competes with sulfite for UV light. Even though the absorbance of sulfite is much higher at 222 nm than at 254 nm, the results indicate that less perchlorate was removed with the UV-KrCl lamp (222 nm) than with the UV-L lamp (254 nm). The results of this study will help to develop the proper way to apply the ARPs as practical water treatment processes.
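The inhibition mechanism described above, NOM competing with sulfite for UV photons, can be illustrated with a simple competitive-absorption calculation. The sketch below assumes that each absorber's share of absorbed photons is proportional to its contribution to the total absorbance at the lamp wavelength; all molar absorptivity and concentration values are placeholders, not measurements from this study.

```python
# Sketch: competitive absorption between sulfite and NOM in a UV/sulfite ARP.
def sulfite_photon_fraction(eps_sulfite, c_sulfite, eps_nom, c_nom):
    """Fraction of absorbed photons captured by sulfite (mixed absorbers)."""
    a_sulfite = eps_sulfite * c_sulfite   # sulfite contribution to absorbance
    a_nom = eps_nom * c_nom               # NOM contribution to absorbance
    return a_sulfite / (a_sulfite + a_nom)

# More NOM means a smaller share of the light reaches sulfite
# (all numbers below are assumed, for illustration only).
for c_nom in (0.0, 1e-4, 2e-4):           # mol/L of NOM chromophores
    f = sulfite_photon_fraction(eps_sulfite=20.0, c_sulfite=1e-2,
                                eps_nom=3000.0, c_nom=c_nom)
    print(f"NOM = {c_nom:.0e} M -> sulfite absorbs {f:.1%} of photons")
```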
NASA Astrophysics Data System (ADS)
Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał
2016-08-01
The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Yeo, Seung-Gu; Kim, Dae Yong; Park, Ji Won; Oh, Jae Hwan; Kim, Sun Young; Chang, Hee Jin; Kim, Tae Hyun; Kim, Byung Chang; Sohn, Dae Kyung; Kim, Min Ju
2012-02-01
Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3-4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume - post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27-99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0-100%). Downstaging (ypT0-2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
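The TVRR formula and the 45% cutoff reported in the abstract translate directly into code. A minimal sketch follows (volumes in any consistent unit; the example numbers are invented, and the function names are ours, not the authors'):

```python
# Sketch: tumor volume reduction rate from pre/post-CRT MR volumetry.
def tvrr(pre_crt_volume, post_crt_volume):
    """TVRR (%) = (pre-CRT volume - post-CRT volume) * 100 / pre-CRT volume."""
    return (pre_crt_volume - post_crt_volume) * 100.0 / pre_crt_volume

def tvrr_group(pre, post, cutoff=45.0):
    """Dichotomize by the 45% cutoff used in the study's multivariate analysis."""
    return "high TVRR (>=45%)" if tvrr(pre, post) >= cutoff else "low TVRR (<45%)"

print(tvrr(80.0, 20.0))        # 75.0 (% volume reduction)
print(tvrr_group(80.0, 50.0))  # low TVRR (<45%): only a 37.5% reduction
```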
Risk reduction activities for an F-1-based advanced booster for NASA's Space Launch System
NASA Astrophysics Data System (ADS)
Crocker, A. M.; Doering, K. B.; Cook, S. A.; Meadows, R. G.; Lariviere, B. W.; Bachtel, F. D.
For NASA's Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) procurement, Dynetics, Inc. and Pratt & Whitney Rocketdyne (PWR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's goal of enabling competition on an affordable booster that meets the evolved capabilities of the SLS. During the ABEDRR effort, the Dynetics Team will apply state-of-the-art manufacturing and processing techniques to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. ABEDRR will use NASA test facilities to perform full-scale F-1 gas generator and powerpack hot-fire test campaigns for engine risk reduction. Dynetics will also fabricate and test a tank assembly to verify the structural design. The Dynetics Team is partnered with NASA through Space Act Agreements (SAAs) to maximize the expertise and capabilities applied to ABEDRR.
Rybnikova, V; Usman, M; Hanna, K
2016-09-01
Although chemical reduction and advanced oxidation processes have been widely used individually, very few studies have assessed the combined reduction/oxidation approach for soil remediation. In the present study, experiments were performed in spiked sand and historically contaminated soil using four synthetic nanoparticles (Fe(0), Fe/Ni, Fe3O4, Fe3-xNixO4). These nanoparticles were tested first for reductive transformation of polychlorinated biphenyls (PCBs) and then employed as catalysts to promote chemical oxidation reactions (H2O2 or persulfate). The results indicated that bimetallic Fe/Ni nanoparticles showed the highest efficiency in reduction of PCB28 and PCB118 in spiked sand (97 and 79%, respectively), whereas magnetite (Fe3O4) exhibited high catalytic stability during the combined reduction/oxidation approach. In chemical oxidation, persulfate showed a higher PCB degradation extent than hydrogen peroxide. As expected, the degradation efficiency was limited in historically contaminated soil, where only Fe(0) and Fe/Ni particles exhibited reductive capability towards PCBs (13 and 18%). In the oxidation step, the highest degradation extents were obtained in the presence of Fe(0) and Fe/Ni (18-19%). Increasing the particle and oxidant doses improved the efficiency of treatment, but overall degradation extents did not exceed 30%, suggesting that only a small part of the PCBs in soil was available for reaction with the catalyst and/or oxidant. The use of organic solvent or cyclodextrin to improve PCB availability in soil did not enhance degradation efficiency, underscoring the strong impact of the soil matrix. Moreover, better PCB degradation was observed in sand spiked with extractable organic matter separated from the contaminated soil. In contrast to fractions with larger particle size (250-500 and <500 μm), no PCB degradation was observed in the finest fraction (≤250 μm), which had a higher organic matter content. These findings
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.
2015-01-01
The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and to enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. For NASA's SLS ABEDRR procurement, Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduced parts counts and touch labor, lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
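For concreteness, the following sketch computes the non-overlapping Allan variance of a fractional-frequency series, the quantity whose spectral ambiguity the report analyzes. It is the textbook estimator, not code from the report, and the white-frequency-noise test series is illustrative.

```python
# Sketch: non-overlapping Allan variance of fractional-frequency data y
# sampled at interval tau0; AVAR(m*tau0) = 0.5 * mean((ybar_{k+1}-ybar_k)^2).
import numpy as np

def allan_variance(y, m):
    """Allan variance at averaging time m*tau0 from frequency data y."""
    n_blocks = len(y) // m
    block_means = np.reshape(y[:n_blocks * m], (n_blocks, m)).mean(axis=1)
    diffs = np.diff(block_means)          # adjacent-block frequency differences
    return 0.5 * np.mean(diffs ** 2)

rng = np.random.default_rng(1)
white_fm = rng.normal(0.0, 1e-11, 100_000)  # white frequency noise (assumed)
for m in (1, 10, 100, 1000):
    print(m, allan_variance(white_fm, m))   # decreases roughly as 1/m
```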
Briggs, J. L.; Younger, A. F.
1980-06-02
A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid - plutonium environments. A vinyl copolymer base paint was the only coating, of eight paints tested, with properties that permitted satisfactory decontamination of plutonium and also performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests.
Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra-high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long-haul, commercial quadjet transport with advanced, high bypass engines mounted above the wing.
DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION
Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson
2002-02-01
The primary objective of the project titled "Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction" was to demonstrate at proof-of-concept scale the use of an online software package, the "Plant Environmental and Cost Optimization System" (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room and optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.
Sakurai, Kenichi; Fujisaki, Shigeru; Nagashima, Saki; Maeda, Tetsuyo; Tomita, Ryouichi; Suzuki, Shuhei; Hara, Yukiko; Hirano, Tomohiro; Enomoto, Katsuhisa; Amano, Sadao
2014-11-01
We report the case of an elderly patient with advanced breast cancer and multiple bone metastases for whom breast reduction surgery was useful. The patient was an 81-year-old woman who had a breast lump. A core needle biopsy led to a diagnosis of invasive ductal carcinoma. The mucinous carcinoma was estrogen receptor (ER) and progesterone receptor (PgR) positive and HER2/neu negative. Due to patient complications, it was not possible to treat with chemotherapy. The patient was administered aromatase inhibitors (AI) and zoledronic acid hydrate. However, the AI treatment was not effective, so she was administered toremifene. Toremifene treatment was effective for 6 months, after which she received fulvestrant. Fulvestrant treatment maintained stable disease (SD) for 14 months. After 14 months of fulvestrant treatment, serum concentrations of the tumor markers CA15-3, CEA, and BCA225 increased. We therefore decided to perform breast reduction surgery. The pathological diagnosis from the surgically resected specimen was mucinous carcinoma, positive for ER and HER2, and negative for PgR. After surgery, serum concentrations of the tumor markers decreased. Following surgery, the patient was administered lapatinib plus denosumab plus fulvestrant. The patient remains well, without bone metastases, 2 years and 6 months after surgery.
NASA Technical Reports Server (NTRS)
Saiyed, Naseem H.
2000-01-01
Contents of this presentation include: Advanced Subsonic Technology (AST) goals and general information; Nozzle nomenclature; Nozzle schematics; Photograph of all baselines; Configurations tests and types of data acquired; and Engine cycle and plug geometry impact on EPNL.
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
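A minimal sketch of the core NANOVA idea for a one-factor, independent-groups design is given below: the effect statistic compares within-group match proportions against the pooled match proportion, and its significance level is obtained by resampling. This follows only the abstract's description; the exact form of the statistic and the example data are our assumptions, not Weiss's published procedure.

```python
# Sketch: match-proportion "analysis of variance" for nominal responses.
import itertools
import random

def match_proportion(responses):
    """Proportion of matching pairs among a list of nominal responses."""
    pairs = list(itertools.combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def nanova_one_way(group_a, group_b, n_resamples=5000, seed=0):
    """Permutation p-value for excess within-group matching over pooled matching."""
    def effect(a, b):
        within = (match_proportion(a) + match_proportion(b)) / 2.0
        return within - match_proportion(list(a) + list(b))

    observed = effect(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_resamples):        # resample group labels
        rng.shuffle(pooled)
        if effect(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            exceed += 1
    return observed, exceed / n_resamples

# Two fictitious independent groups giving nominal (categorical) answers.
effect_size, p_value = nanova_one_way(["yes"] * 8 + ["no"] * 2,
                                      ["no"] * 7 + ["maybe"] * 3)
print(f"match-excess = {effect_size:.3f}, resampling p = {p_value:.4f}")
```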
Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili
2016-04-15
This study investigated the reduction of antibiotic resistance genes (ARGs), intI1, and 16S rRNA genes by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce the selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. In particular, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and an H2O2 concentration of 0.01 mol L(-1) with a pH of 3.0 and a reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with an H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs achieved a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed first-order reaction kinetics. The removal of target genes was affected by many parameters, including the initial Fe(2+)/H2O2 molar ratio, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH are the most important during AOPs.
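The first-order kinetic model mentioned above can be fit with a simple log-linear regression. The sketch below uses fabricated gene-copy numbers purely to show the fitting step and to echo the "logs removed" metric of the abstract; no values are taken from the study.

```python
# Sketch: fit ln(N0/Nt) = k*t for a first-order disinfection/oxidation model.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])               # reaction time, h (assumed)
copies = np.array([1e8, 2.4e7, 6.1e6, 1.5e6, 3.6e5])  # gene copies/mL (assumed)

log_removal = np.log10(copies[0] / copies)            # "logs removed", as in text
k, intercept = np.polyfit(t, np.log(copies[0] / copies), 1)

print(f"first-order rate constant k = {k:.2f} 1/h")
print(f"logs removed at 2 h = {log_removal[-1]:.2f}")
```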
NASA Technical Reports Server (NTRS)
Goodall, R. G.; Painter, G. W.
1975-01-01
Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating-centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed-flow nacelle with extended inlet and no splitters, was conducted, and the effects on noise, direct operating cost, and return on investment were determined.
Advanced sewage treatment process with excess sludge reduction and phosphorus recovery.
Saktaywin, W; Tsuno, H; Nagare, H; Soyama, T; Weerapakkaroon, J
2005-03-01
An advanced sewage treatment process has been developed in which excess sludge reduction by ozonation and phosphorus recovery by a crystallization process are incorporated into a conventional anaerobic/oxic (A/O) phosphorus removal process. A mathematical model was developed to describe the mass balance principle at a steady state of this process. Sludge ozonation experiments were carried out to investigate the solubilization characteristics of sludge and the change in microbial activity, using sludge cultured on synthetic sewage under the A/O process. Phosphorus was solubilized by ozonation, as were organics, and acid-hydrolyzable phosphorus (AHP) made up most of the solubilized phosphorus for sludge containing phosphorus accumulating organisms (PAOs). At a solubilization of 30%, around 70% of the sludge was inactivated by ozonation. The results indicated that the proposed process configuration has the potential to reduce excess sludge production as well as to recover phosphorus in usable forms. The system performance results show that this system is practical when a solubilization degree of 30% is achieved by ozonation. In this study, 30% solubilization was achieved at 30 mgO(3)/gSS of ozone consumption.
Advances of Ag, Cu, and Ag-Cu alloy nanoparticles synthesized via chemical reduction route
NASA Astrophysics Data System (ADS)
Tan, Kim Seah; Cheong, Kuan Yew
2013-04-01
Silver (Ag) and copper (Cu) nanoparticles have shown great potential in a variety of applications due to their excellent electrical and thermal properties, resulting in high demand in the market. Decreasing their size to the nanometer scale distinctly improves these inherent properties due to the larger surface-to-volume ratio. Ag and Cu nanoparticles also show higher surface reactivity and are therefore used to improve interfacial and catalytic processes. Their melting points are also dramatically lower than those of the bulk metals, so they can be processed at relatively low temperature. In addition, alloying Ag into Cu to create Ag-Cu alloy nanoparticles can mitigate the rapid oxidation of Cu nanoparticles. A variety of methods has been reported for the synthesis of Ag, Cu, and Ag-Cu alloy nanoparticles. This review covers the chemical reduction route to those nanoparticles. Advances in this technique utilizing different reagents, namely metal salt precursors, reducing agents, and stabilizers, as well as their effects on the respective nanoparticles, are systematically reviewed. Other parameters, such as pH and temperature, that are considered important factors influencing the quality of those nanoparticles are also reviewed thoroughly.
Yu, Hui; Nie, Er; Xu, Jun; Yan, Shuwen; Cooper, William J; Song, Weihua
2013-04-01
Many pharmaceutical compounds and metabolites are found in surface and ground waters, suggesting their ineffective removal by conventional wastewater treatment technologies. Advanced oxidation/reduction processes (AO/RPs), which utilize free radical reactions to directly degrade chemical contaminants, are alternatives to traditional water treatment. This study reports the absolute rate constants for reaction of diclofenac sodium and a model compound (2,6-dichloroaniline) with the two major AO/RP radicals: the hydroxyl radical (•OH) and the hydrated electron (e(aq)(-)). The bimolecular reaction rate constants (M(-1) s(-1)) for diclofenac were (9.29 ± 0.11) × 10(9) for •OH and (1.53 ± 0.03) × 10(9) for e(aq)(-). To provide a better understanding of the decomposition of the intermediate radicals produced by hydroxyl radical reactions, transient absorption spectra were observed from 1 to 250 μs. In addition, preliminary degradation mechanisms and major products were elucidated using (60)Co γ-irradiation and LC-MS. The toxicity of products was evaluated using luminescent bacteria. These data are required both for evaluating the potential use of AO/RPs for the destruction of these compounds and for studies of their fate and transport in surface waters, where radical chemistry may be important in assessing their lifetime.
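To show how such bimolecular rate constants are used in practice, the sketch below converts the reported •OH rate constant into a pseudo-first-order half-life under an assumed steady-state radical concentration; the 1e-13 M value is an assumption for illustration (steady-state •OH levels vary widely between treatment systems), not a number from this study.

```python
# Sketch: pseudo-first-order kinetics from a bimolecular rate constant,
# k' = k_OH * [•OH]_ss and t_1/2 = ln(2) / k'.
import math

k_oh = 9.29e9     # M^-1 s^-1, reported for diclofenac + hydroxyl radical
oh_ss = 1e-13     # M, assumed steady-state radical concentration (placeholder)

k_pseudo = k_oh * oh_ss                 # s^-1
half_life_s = math.log(2) / k_pseudo
print(f"pseudo-first-order half-life ~ {half_life_s / 60:.1f} min")
```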
Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.
Code of Federal Regulations, 2014 CFR
2014-10-01
45 CFR 800.106: Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions (Office of Personnel Management, Multi-State Plan Program).
Code of Federal Regulations, 2013 CFR
2013-10-01
45 CFR 800.106 - Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions (Title 45, Public Welfare, Office of Personnel Management, Multi-State Plan Program).
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over a larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Park, Tai Sun; Hong, Yoonki; Lee, Jae Seung; Oh, Sang Young; Lee, Sang Min; Kim, Namkug; Seo, Joon Beom; Oh, Yeon-Mok; Lee, Sang-Do; Lee, Sei Won
2015-01-01
Purpose: Endobronchial valve (EBV) therapy is increasingly being seen as a therapeutic option for advanced emphysema, but its clinical utility in Asian populations, whose phenotypes may differ from those of other ethnic populations, has not been assessed. Patients and methods: This prospective open-label single-arm clinical trial examined the clinical efficacy and safety of EBV in 43 consecutive patients (mean age 68.4±7.5 years, forced expiratory volume in 1 second [FEV1] 24.5%±10.7% predicted, residual volume 208.7%±47.9% predicted) with severe emphysema with complete fissures and no collateral ventilation in a tertiary referral hospital in Korea. Results: Compared to baseline, the patients exhibited significant improvements 6 months after EBV therapy in FEV1 (from 0.68±0.26 L to 0.92±0.40 L; P<0.001), 6-minute walk distance (from 233.5±114.8 m to 299.6±87.5 m; P=0.012), modified Medical Research Council dyspnea scale (from 3.7±0.6 to 2.4±1.2; P<0.001), and St George's Respiratory Questionnaire (from 65.59±13.07 to 53.76±11.40; P=0.028). Nine patients (20.9%) had a tuberculosis scar, but these scars did not affect target lobe volume reduction or pneumothorax frequency. Thirteen patients had adverse events; ten (23.3%) developed pneumothorax, including one death due to tension pneumothorax. Conclusion: EBV therapy was as effective and safe in Korean patients as it has been shown to be in Western countries. (Trial registration: ClinicalTrials.gov: NCT01869205). PMID:26251590
Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa
2005-03-01
Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. Coal reburning process parameters, including 15%-25% reburn heat input, a temperature range from 1100 degrees C to 1400 degrees C, carbon in fly ash, coal fineness, and reburn zone stoichiometric ratio, were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 degrees C, and a fuel-rich stoichiometric ratio is essential; finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700-1100 degrees C. CO can improve the NH3 reduction ability at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, selective non-catalytic NOx reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, whereas advanced reburning uses only the common dose of ammonia of conventional SNCR technology. A mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures.
Sampling Errors of Variance Components.
ERIC Educational Resources Information Center
Sanders, Piet F.
A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…
Sorge, J.N.; Menzies, B.; Smouse, S.M.; Stallings, J.W.
1995-09-01
This paper reports on a technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) on the design and results to date from the advanced digital control/optimization phase of the project.
Recent advances in membrane bio-technologies for sludge reduction and treatment.
Wang, Zhiwei; Yu, Hongguang; Ma, Jinxing; Zheng, Xiang; Wu, Zhichao
2013-12-01
This paper is designed to critically review the recent developments of membrane bio-technologies for sludge reduction and treatment by covering process fundamentals, performance (sludge reduction efficiency, membrane fouling, pollutant removal, etc.), and key operational parameters. The future perspectives of hybrid membrane processes for sludge reduction and treatment are also discussed. For sludge reduction using membrane bioreactors (MBRs), the literature review shows that biological maintenance metabolism, predation on bacteria, and uncoupling metabolism through the oxic-settling-anaerobic (OSA) process are promising approaches that can be employed in full-scale applications. Development of control methods for worm proliferation is greatly needed, and good sludge reduction and MBR performance can be expected if worm growth is properly controlled. For the lysis-cryptic sludge reduction method, improving oxidant dispersion and increasing its interaction with sludge cells can enhance the lysis efficiency. Green uncoupler development might be another research direction for uncoupling metabolism in MBRs. Aerobic hybrid membrane systems can perform well for sludge thickening and digestion in small- and medium-sized wastewater treatment plants (WWTPs), and pilot-scale/full-scale applications have been reported. The anaerobic membrane digestion (AMD) process is a very competitive technology for sludge stabilization and digestion. Use of biogas recirculation for fouling control can be a powerful way to decrease the energy requirements of the AMD process. Future research efforts should be dedicated to membrane preparation for high-biomass applications, process optimization, and pilot-scale/full-scale tracking research in order to push forward the real and wide application of hybrid membrane systems for sludge minimization and treatment.
Roden, E.E.; Urrutia, M.M.
1997-07-01
The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10-20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggested that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g. extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and pattern
NASA Astrophysics Data System (ADS)
Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi
2010-09-01
This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.
ERIC Educational Resources Information Center
Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab
2012-01-01
An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…
Stereospecific Reductions of Delta4-Cholesten-3-one: An Advanced Organic Synthesis Project.
ERIC Educational Resources Information Center
Markgraf, J. Hodge; And Others
1988-01-01
Outlines a multistep project involving oxidation of cholesterol, isomerization of an enone, and reduction of delta-4-cholesten-3-one. Featured is the last stage in which the ring junction is set stereospecifically. Recommends two laboratory periods to complete the reaction. (ML)
Zhang, Shihan; Chen, Han; Xia, Yinfeng; Liu, Nan; Lu, Bi-Hong; Li, Wei
2014-10-01
Anthropogenic nitrogen oxides (NOx) emitted from fossil-fuel-fired power plants cause adverse environmental issues such as acid rain, urban ozone, and photochemical smog. A novel chemical absorption-biological reduction (CABR) integrated process under development is regarded as a promising alternative to conventional selective catalytic reduction processes for NOx removal from flue gas because it is economical and environmentally friendly. The CABR process employs ferrous ethylenediaminetetraacetate [Fe(II)EDTA] as a solvent to absorb the NOx, followed by microbial denitrification of the NOx to harmless nitrogen gas. Meanwhile, the absorbent Fe(II)EDTA is biologically regenerated to sustain adequate NOx removal. Compared with the conventional denitrification process, CABR not only enhances the mass transfer of NO from the gas to the liquid phase but also minimizes the impact of oxygen on the microorganisms. This review provides the current advances in the development of the CABR process for NOx removal from flue gas.
NASA Technical Reports Server (NTRS)
Hughes, Christopher E.; Gazzaniga, John A.
2013-01-01
A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 and 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.
Littleton, Harry; Griffin, John
2011-07-31
This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control, and marketing tools were developed to advance the Lost Foam Casting process and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy saving estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion BTU/year, rising to 6.46 trillion BTU/year with 100% market penetration by 2023. Along with these energy savings, the reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with the melting and pouring of the metal saved as a result of this technology. The average annual estimate of CO2 reduction per year through 2020 is 0.03 million metric tons of carbon equivalent (MM TCE).
NASA Technical Reports Server (NTRS)
Lemanski, A. J.
1976-01-01
Helicopter drive-system technology which would result in the largest benefit in direct maintenance cost when applied to civil helicopters in the 1980 timeframe was developed. A prototype baseline drive system based on 1975 technology provided the basis for comparison against the proposed advanced technology in order to determine the potential for each area recommended for improvement. A specific design example of an advanced-technology main transmission is presented to define improvements for maintainability, weight, producibility, reliability, noise, vibration, and diagnostics. Projections of the technology achievable in the 1980 timeframe are presented. Based on this data, the technologies with the highest payoff (lowest direct maintenance cost) for civil-helicopter drive systems are identified.
External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Niedra, Janis M.; Geng, Steven M.
2013-01-01
Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.
ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION
Vangelas, K.; Edwards, Elizabeth; Loffler, Frank; Looney, Brian
2006-11-17
Regulatory protocols generally recognize that destructive processes are the most effective mechanisms that support natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological processes and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) provides a list of parameters that provide indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field. The research described herein continues their development efforts to provide a suite of tools to enable direct measures of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities in a system because it does not distinguish between strains of organisms that have different biodegradation capabilities. The results of the investigations provided evidence that tools focusing on relevant enzymes for functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.
NASA Astrophysics Data System (ADS)
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2017-02-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach to supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of the atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated for the statistical downscaling of precipitation at a specific site using two nonlinear machine learning methods, Support Vector Regression and the Relevance Vector Machine. The results demonstrate a significant improvement from the Supervised PCA methods in terms of performance accuracy.
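For readers unfamiliar with the method, a minimal sketch of linear Supervised PCA follows, assuming the common formulation in which the projection maximizes an HSIC-type dependence criterion between the projected predictors and the response; the function name, toy data, and component count are illustrative, not taken from the paper.

import numpy as np

def supervised_pca(X, y, n_components=2):
    # X: (n, p) matrix of atmospheric predictors; y: (n,) hydro-climate response.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    L = np.outer(y, y)                              # linear kernel on the response
    Q = X.T @ H @ L @ H @ X                         # dependence matrix to diagonalize
    w, V = np.linalg.eigh(Q)
    U = V[:, np.argsort(w)[::-1][:n_components]]    # top eigenvectors
    return X @ U                                    # supervised principal components

# Toy usage: 200 samples, 10 predictors, response driven by the first two.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
Z = supervised_pca(X, y, n_components=2)            # components aligned with y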
Armor Possibilities and Radiographic Blur Reduction for The Advanced Hydrotest Facility
Hackett, M
2001-09-01
Currently at Lawrence Livermore National Laboratory (LLNL) a composite firing vessel is under development for the Advanced Hydrotest Facility (AHF) to study high explosives. This vessel requires a shrapnel-mitigating layer to protect it during experiments. The primary purpose of this layer is to protect the vessel, yet the material must be transparent to proton radiographs. Presented here are methods available to collect the data needed before selection, along with a comparison tool developed to aid in choosing a material that offers the best ballistic protection while allowing for clear radiographs.
NASA Astrophysics Data System (ADS)
Satake, Kenji
2014-12-01
The December 2004 Indian Ocean tsunami was the worst tsunami disaster in the world's history, with more than 200,000 casualties. This disaster was attributed to the giant size (magnitude M ~ 9, source length >1000 km) of the earthquake and to the lack of expectation of such an earthquake, of a tsunami warning system, and of knowledge and preparedness for tsunamis in the Indian Ocean countries. In the last ten years, seismology and tsunami sciences as well as tsunami disaster risk reduction have developed significantly. Progress in seismology includes implementation of earthquake early warning, real-time estimation of earthquake source parameters and tsunami potential, paleoseismological studies on past earthquakes and tsunamis, and studies of probable maximum size, recurrence variability, and long-term forecasting of large earthquakes in subduction zones. Progress in tsunami science includes accurate modeling of tsunami sources, such as the contribution of horizontal components or "tsunami earthquakes", development of new types of offshore and deep-ocean tsunami observation systems such as GPS buoys or bottom pressure gauges, deployment of DART gauges in the Pacific and other oceans, improvements in tsunami propagation modeling, and real-time inversion or data assimilation for tsunami warning. These developments have been utilized for tsunami disaster reduction in the forms of tsunami early warning systems, tsunami hazard maps, and probabilistic tsunami hazard assessments. Some of the above scientific developments helped to reveal the source characteristics of the 2011 Tohoku earthquake, which caused devastating tsunami damage in Japan and the Fukushima Dai-ichi Nuclear Power Station accident. Toward tsunami disaster risk reduction, interdisciplinary and trans-disciplinary approaches are needed among scientists and other stakeholders.
Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet
URIBARRI, JAIME; WOODRUFF, SANDRA; GOODMAN, SUSAN; CAI, WEIJING; CHEN, XUE; PYZIK, RENATA; YONG, ANGIE; STRIKER, GARY E.; VLASSARA, HELEN
2013-01-01
Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and introduces practical approaches for reducing dAGE consumption in daily life. Based on the findings, dry heat promotes new dAGE formation by >10- to 100-fold above the uncooked state across food categories. Animal-derived foods that are high in fat and protein are generally AGE-rich and prone to new AGE formation during cooking. In contrast, carbohydrate-rich foods such as vegetables, fruits, whole grains, and milk contain relatively few AGEs, even after cooking. The formation of new dAGEs during cooking was prevented by the AGE inhibitory compound aminoguanidine and significantly reduced by cooking with moist heat, using shorter cooking times, cooking at lower temperatures, and by use of acidic ingredients such as lemon juice or vinegar. The new dAGE database provides a valuable instrument for estimating dAGE intake and for guiding food choices to reduce dAGE intake. PMID:20497781
VPSim: Variance propagation by simulation
Burr, T.; Coulter, C.A.; Prommel, J.
1997-12-01
One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B - T_out - I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or more generally, they can estimate the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
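The simulation idea can be illustrated in a few lines: draw measurement errors for each term, form the MB sequence many times, and take the sample covariance. The sketch below is not VPSim itself; the numerical values and the Gaussian error model are illustrative assumptions, and the off-diagonal covariance arises because each period's ending inventory is the next period's beginning inventory.

import numpy as np

rng = np.random.default_rng(1)
n_sim, n_mb = 100_000, 4                 # simulated histories, MBs per history

# Hypothetical true values and measurement error standard deviations.
T_in, I_B, T_out, I_E = 100.0, 50.0, 100.0, 50.0
s_T, s_I = 0.5, 0.3

MB = np.empty((n_sim, n_mb))
for k in range(n_sim):
    inv = I_B + rng.normal(0.0, s_I)     # measured beginning inventory
    for j in range(n_mb):
        tin = T_in + rng.normal(0.0, s_T)
        tout = T_out + rng.normal(0.0, s_T)
        inv_end = I_E + rng.normal(0.0, s_I)
        MB[k, j] = tin + inv - tout - inv_end
        inv = inv_end                    # ending inventory becomes next beginning

Sigma = np.cov(MB, rowvar=False)         # n_mb x n_mb covariance matrix of the MBs
sigma_MB = np.sqrt(np.diag(Sigma))       # per-balance standard deviations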
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
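As a concrete companion to this description, the following sketch runs a one-way ANOVA on three simulated groups; the data and group means are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three groups with a common spread but different means.
g1 = rng.normal(10.0, 2.0, size=30)
g2 = rng.normal(11.0, 2.0, size=30)
g3 = rng.normal(13.0, 2.0, size=30)

# H0: all group means are equal; ANOVA asks whether at least two differ.
F, p = stats.f_oneway(g1, g2, g3)
print(F, p)    # a small p-value suggests at least one mean differs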
NASA Technical Reports Server (NTRS)
Wagenknecht, C. D.; Bediako, E. D.
1985-01-01
Advanced Supersonic Transport jet noise may be reduced to Federal Air Regulation limits if recommended refinements to a recently developed ejector shroud exhaust system are successfully carried out. A two-part program consisting of a design study and a subscale model wind tunnel test effort conducted to define an acoustically treated ejector shroud exhaust system for supersonic transport application is described. Coannular, 20-chute, and ejector shroud exhaust systems were evaluated. Program results were used in a mission analysis study to determine aircraft takeoff gross weight to perform a nominal design mission, under Federal Aviation Regulation (1969), Part 36, Stage 3 noise constraints. Mission trade study results confirmed that the ejector shroud was the best of the three exhaust systems studied with a significant takeoff gross weight advantage over the 20-chute suppressor nozzle which was the second best.
Vibration reduction in advanced composite turbo-fan blades using embedded damping materials
NASA Astrophysics Data System (ADS)
Kosmatka, John B.; Lapid, Alex J.; Mehmed, Oral
1996-05-01
A preliminary design and analysis procedure for locating an integral damping treatment in composite turbo-propeller blades has been developed. This finite element based approach, which is based upon the modal strain energy method, is used to size and locate the damping material patch so that the damping (loss factor) is maximized in a particular mode while minimizing the overall stiffness loss (minimal reductions in the structural natural frequencies). Numerical results are presented to illustrate the variation in the natural frequencies and damping levels as a result of stacking sequence, integral damping patch size and location, and border materials. Experimental studies were presented using flat and pretwisted (30 degrees) integrally damped composite blade-like structures to show how a small internal damping patch can significantly increase the damping levels without sacrificing structural integrity. Moreover, the use of a soft border material around the patch can greatly increase the structural damping levels.
NASA Astrophysics Data System (ADS)
Wu, Renbing; Xue, Yanhong; Liu, Bo; Zhou, Kun; Wei, Jun; Chan, Siew Hwa
2016-10-01
Highly efficient and cost-effective electrocatalyst for the oxygen reduction reaction (ORR) is crucial for a variety of renewable energy applications. Herein, strongly coupled hybrid composites composed of cobalt diselenide (CoSe2) nanoparticles embedded within graphitic carbon polyhedra (GCP) as high-performance ORR catalyst have been rationally designed and synthesized. The catalyst is fabricated by a convenient method, which involves the simultaneous pyrolysis and selenization of preformed Co-based zeolitic imidazolate framework (ZIF-67). Benefiting from the unique structural features, the resulting CoSe2/GCP hybrid catalyst shows high stability and excellent electrocatalytic activity towards ORR (the onset and half-wave potentials are 0.935 and 0.806 V vs. RHE, respectively), which is superior to the state-of-the-art commercial Pt/C catalyst (0.912 and 0.781 V vs. RHE, respectively).
NASA Technical Reports Server (NTRS)
Rao, D. M.; Goglia, G. L.
1981-01-01
Accomplishments in vortex flap research are summarized. A singular feature of the vortex flap is that, throughout the angle-of-attack range, the flow type remains qualitatively unchanged. Accordingly, no large or sudden change in the aerodynamic characteristics, as happens when forcibly maintained attached flow suddenly reverts to separation, will occur with the vortex flap. Typical wind tunnel test data are presented which show the drag reduction potential of the vortex flap concept applied to a supersonic cruise airplane configuration. The new technology offers a means of aerodynamically augmenting roll-control effectiveness on slender wings at higher angles of attack by manipulating the vortex flow generated from leading-edge separation. The proposed manipulator takes the form of a flap hinged at or close to the leading edge, normally retracted flush with the wing upper surface to conform to the airfoil shape.
ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR
Robert S. Weber
1999-05-01
Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner, Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on recently demonstrated SO2-reduction catalyst performance at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO2-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported catalyst formulation work at Engelhard, bench-scale supported catalyst testing at Arthur D. Little, and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst which is robust and flexible with regard to choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed, namely that the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated under appropriate levels of conversion and in conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work, which would have included further bench-scale testing.
Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe
2015-12-01
Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesion surfaces, and the flexible skin of the dolphin accelerates its swimming. Great benefits from applying biological functional surfaces in daily life, industry, transportation, and agriculture have been achieved so far, and much attention from all over the world has been attracted to and focused on this field. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are then demonstrated in brief. This overview will improve comprehension of the drag reduction mechanism of the sharkskin surface and better understanding of its recent applications in fluid engineering.
Nomura, N; Yamada, A; Saitou, F; Tsuzawa, T; Yamashita, I; Sakakibara, T; Shimizu, T; Sakamoto, T; Karaki, Y; Tazawa, K
1994-05-01
A 54-year-old man was diagnosed with Borrmann type 1 gastric cancer, located just below the ECJ, with some paraaortic lymph node metastases, during treatment of diabetes mellitus at another hospital. He underwent spleno-total gastrectomy for reduction. The metastatic lymph nodes of the para-aorta were not resected, so the surgery was considered palliative. We administered FTP chemotherapy (CDDP 110 mg/day 1, 5-FU 1,200 mg/day 1-5, THP-ADM 30 mg/day 1) 5 times following surgery. The metastatic lymph nodes were remarkably decreased in size by the initial treatment. The decrement was 52.4% after the initial treatment (PR). After the 4th treatment, no lymph nodes were detected (CR). After the 5th treatment, CR continued. The PR period was considered to be 5 months, and that of CR 4 months. The patient has no renal or heart dysfunction, and no suppression of bone marrow. His quality of life is satisfactory, and he continues to work as he did before surgery. FTP chemotherapy is considered a successful regimen for postoperative chemotherapy.
2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction
Smith, Aaron; Stehly, Tyler; Musial, Walter
2015-09-29
2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015, with demonstrable progress towards industry-wide cost reduction goals. DWW is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase that will illustrate the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
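A toy version of this idea can convey the essence: give each reaction channel its own random stream, so a channel's first-order contribution to the output variance can be estimated by re-simulating with that channel's stream held fixed. The birth-death model, rate values, and sample sizes below are illustrative assumptions, and the estimator shown is the standard first-order Sobol pick-freeze formula rather than the authors' Poisson-reformulation machinery.

import numpy as np

def birth_death(seed_b, seed_d, b=1.0, d=0.1, x0=0, T=50.0):
    # Exact SSA (first-reaction method) for X -> X+1 (rate b) and
    # X -> X-1 (rate d*X), with the two channels driven by independent
    # streams; memorylessness makes redrawing waiting times each step exact.
    rb, rd = np.random.default_rng(seed_b), np.random.default_rng(seed_d)
    t, x = 0.0, x0
    while True:
        tb = rb.exponential(1.0 / b)
        td = rd.exponential(1.0 / (d * x)) if x > 0 else np.inf
        if t + min(tb, td) > T:
            return x
        t += min(tb, td)
        x += 1 if tb < td else -1

rng = np.random.default_rng(0)
N = 2000
seeds = rng.integers(0, 2**31, size=(N, 3))
y  = np.array([birth_death(s[0], s[1]) for s in seeds], float)
yb = np.array([birth_death(s[0], s[2]) for s in seeds], float)  # birth stream frozen
S_birth = (np.mean(y * yb) - np.mean(y) * np.mean(yb)) / np.var(y)
print(S_birth)   # share of output variance attributable to the birth channel alone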
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
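A direct transcription of the third-difference formula is short; the sketch below assumes time-error (phase) samples x at spacing tau0, and the normalization follows the standard definition of Mod sigma_y^2(tau) with tau = m*tau0.

import numpy as np

def mod_avar(x, m, tau0=1.0):
    # Modified Allan variance via the third difference of the cumulative
    # sum of the time residuals x, at averaging factor m.
    S = np.concatenate(([0.0], np.cumsum(x)))            # S[k] = x_1 + ... + x_k
    d3 = S[3*m:] - 3.0*S[2*m:-m] + 3.0*S[m:-2*m] - S[:-3*m]
    tau = m * tau0
    return np.mean(d3**2) / (2.0 * m**2 * tau**2)

# Usage on simulated white phase noise (illustrative only).
x = np.random.default_rng(1).normal(size=10_000)
print([mod_avar(x, m) for m in (1, 2, 4, 8)])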
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Advanced noise reduction in placental ultrasound imaging using CPU and GPU: a comparative study
NASA Astrophysics Data System (ADS)
Zombori, G.; Ryan, J.; McAuliffe, F.; Rainford, L.; Moran, M.; Brennan, P.
2010-03-01
This paper presents a comparison of different implementations of a 3D anisotropic diffusion speckle noise reduction technique on ultrasound images. In this project we are developing a novel volumetric calcification assessment metric for the placenta and providing a software tool for this purpose. The tool can also automatically segment and visualize (in 3D) ultrasound data. One of the first steps when developing such a tool is to find a fast and efficient way to eliminate speckle noise. Previous works on this topic by Duan, Q. [1] and Sun, Q. [2] have shown that the 3D speckle reducing anisotropic diffusion (3D SRAD) method offers exceptional performance in enhancing ultrasound images for object segmentation. Therefore we have implemented this method in our software application and performed a comparative study of the different variants in terms of performance and computation time. To increase processing speed it was necessary to utilize the full potential of current state-of-the-art Graphics Processing Units (GPUs). Our 3D datasets are represented in a spherical volume format. For 2D slice visualization and segmentation, a "scan conversion" or "slice-reconstruction" step is needed, which includes coordinate transformation from spherical to Cartesian, re-sampling of the volume, and interpolation. Combining the noise filtering and slice reconstruction in one process on the GPU, we can achieve close to real-time operation on high-quality data sets without the need for down-sampling or reducing image quality. OpenCL was used for the GPU programming; the presented solution is therefore fully portable.
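To make the diffusion idea concrete, here is a minimal 2D Perona-Malik style sketch in Python; it is a simplification of SRAD (which uses local speckle statistics rather than a plain gradient-based conductance), and the kappa, lam, and iteration values are arbitrary illustrative choices. Note that np.roll wraps at the borders, which a production filter would replace with replicated edges.

import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    # Iteratively smooths homogeneous regions while preserving edges:
    # the conductance g() shrinks toward 0 across strong gradients.
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dN = np.roll(u, 1, axis=0) - u        # differences to the 4 neighbours
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Toy usage: multiplicative speckle on a step image.
rng = np.random.default_rng(0)
step = np.zeros((64, 64)); step[:, 32:] = 1.0
noisy = step * rng.gamma(shape=16.0, scale=1.0 / 16.0, size=step.shape)
clean = anisotropic_diffusion(noisy)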
Is lung volume reduction surgery effective in the treatment of advanced emphysema?
Zahid, Imran; Sharif, Sumera; Routledge, Tom; Scarci, Marco
2011-03-01
A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was whether lung volume reduction surgery (LVRS) might be superior to medical treatment in the management of patients with severe emphysema. Overall 497 papers were found using the reported search, of which 12 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results are tabulated. We conclude that LVRS produces superior patient outcomes compared to medical treatment in terms of exercise capacity, lung function, quality of life and long-term (>1 year postoperative) survival. A large proportion of the best evidence on this topic is based on analysis of the National Emphysema Treatment Trial (NETT). Seven studies compared LVRS to medical treatment alone (MTA) using data generated by the NETT trial. They found higher quality of life scores (45.3 vs. 27.5, P<0.001), improved maximum ventilation (32.8 vs. 29.6 l/min, P=0.001) and lower exacerbation rate per person-year (0.27 vs. 0.37%, P=0.0005) with LVRS than MTA. Mortality rates for LVRS were greater up to one year (P=0.01), equivalent by three years (P=0.15) and lower after four years (P=0.06) postoperative compared to MTA. Patients with upper-lobe-predominant disease and low exercise capacity (0.36 vs. 0.54, P=0.003) benefited the most from undergoing LVRS rather than MTA in terms of probability of death at five years compared to patients with non-upper-lobe disease (0.38 vs. 0.45, P=0.03) or upper-lobe-disease with high exercise capacity (0.33 vs. 0.38, P=0.32). Five studies compared LVRS to MTA using data independent from the NETT trial. They found greater six-minute walking distances (433 vs. 300 m, P<0.002), improved total lung capacity (18.8 vs. 7.9% predicted, P<0.02) and quality of life scores (47 vs. 23.2, P<0.05) with LVRS compared to MTA. Even though LVRS has a much
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf(V) = 2(E[V])^2 / var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate chi-squared distribution.
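Given an mvar estimate and its edf, the chi-squared interval in the last sentence takes one line per endpoint; the sketch below uses the standard construction edf*V/v ~ chi2(edf), with the 0.95 level and the numerical inputs as illustrative choices.

from scipy import stats

def mvar_confidence_interval(v_hat, edf, level=0.95):
    # If edf * v_hat / v ~ chi2(edf), the equal-tailed interval for the
    # true mvar v inverts the chi-squared quantiles.
    upper_q = stats.chi2.ppf(0.5 + level / 2.0, edf)
    lower_q = stats.chi2.ppf(0.5 - level / 2.0, edf)
    return edf * v_hat / upper_q, edf * v_hat / lower_q

print(mvar_confidence_interval(v_hat=1.0e-24, edf=12.3))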
Berroya, Renato B.; Escano, Fernando B.
1972-01-01
This report deals with a rare complication of disc-valve prostheses in the mitral area. Significant disc poppet and strut destruction of mitral Beall valve prostheses occurred 20 and 17 months after implantation. The resulting valve incompetence in the first case contributed to the death of the patient. The durability of Teflon prosthetic valves appears to be in question, and this type of valve will probably be unacceptable if the number of disc-valve variances increases in the future. PMID:5017573
Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.
1997-12-31
This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) on the design and results to date from the advanced digital control/optimization phase of the project.
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
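The Haar connection can be checked numerically in a few lines: the overlapped Allan variance of fractional-frequency data equals twice the Haar MODWT wavelet variance at the same dyadic scale. The sketch below uses simulated white frequency noise and powers-of-two averaging factors; the data and normalization conventions are illustrative assumptions.

import numpy as np

def allan_var(y, m):
    # Overlapped Allan variance of fractional-frequency data y at factor m.
    c = np.concatenate(([0.0], np.cumsum(y)))
    avg = (c[m:] - c[:-m]) / m            # running m-sample averages
    d = avg[m:] - avg[:-m]                # differences of adjacent averages
    return 0.5 * np.mean(d**2)

def haar_wavelet_var(y, m):
    # Haar MODWT wavelet variance at scale tau_j with m = 2**(j-1).
    c = np.concatenate(([0.0], np.cumsum(y)))
    avg = (c[m:] - c[:-m]) / m
    W = 0.5 * (avg[m:] - avg[:-m])        # MODWT Haar coefficients (up to sign)
    return np.mean(W**2)

y = np.random.default_rng(42).normal(size=4096)   # white frequency noise
for m in (1, 2, 4, 8):
    print(m, allan_var(y, m), 2.0 * haar_wavelet_var(y, m))  # columns match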
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.
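The core phenomenon, that a systematic design has genuinely lower variance than a random design while the random-design formula over-reports it, is easy to reproduce in simulation. The sketch below is a toy 1-D strip survey of an aggregated population, not the striplet estimator itself; the population model and strip geometry are invented for illustration.

import numpy as np

rng = np.random.default_rng(7)
objs = np.sort(rng.beta(2, 5, size=2000))          # objects aggregated near one end

K, width = 20, 0.005                               # 20 strips, 10% total coverage
frac = K * width

def total_count(starts):
    return sum(((objs >= s) & (objs < s + width)).sum() for s in starts)

est_sys, est_rnd = [], []
for _ in range(2000):
    offset = rng.uniform(0.0, 1.0 / K)
    sys_starts = offset + np.arange(K) / K         # systematic: one random offset
    rnd_starts = rng.uniform(0.0, 1.0 - width, K)  # simple random placement
    est_sys.append(total_count(sys_starts) / frac)
    est_rnd.append(total_count(rnd_starts) / frac)

# True design variances: systematic is far smaller for a trended population,
# so treating a systematic sample as if it were random over-reports the variance.
print(np.var(est_sys), np.var(est_rnd))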
NASA Astrophysics Data System (ADS)
Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.
2009-05-01
The complexity of multitemporal/multispectral astronomical data sets together with the approaching petascale of such datasets and large astronomical surveys require automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increase of the dimensionality of data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality", thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase computational efficiency of machine learning algorithms, improve statistical inference and together with image fusion enable effective scientific visualization (as opposed to mere illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for the data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative
Weinstein, R.E.; Tonnemacher, G.C.
1999-07-01
The Clinton Administration signed the 1997 Kyoto Protocol agreement that would limit US greenhouse gas emissions, of which carbon dioxide (CO2) is the most significant. While the Kyoto Protocol has not yet been submitted to the Senate for ratification, in the past there have been few proposed environmental actions that had the continued and wide-spread attention of the press and environmental activists yet did not eventually lead to regulation. Since the Kyoto Protocol might lead to future regulation, its implications need investigation by the power industry. Limiting CO2 emissions affects the ability of the US to generate reliable, low-cost electricity, and has tremendous potential impact on electric generating companies with a significant investment in coal-fired generation, and on their customers. This paper explores the implications of reducing coal plant CO2 by various amounts. The amount of reduction for the US that is proposed in the Kyoto Protocol is huge. The Kyoto Protocol would commit the US to reduce its CO2 emissions to 7% below 1990 levels. Since 1990, there has been significant growth in US population and the US economy, driving carbon emissions 34% higher by year 2010. That means CO2 would have to be reduced by 30.9%, which is extremely difficult to accomplish. The paper tells why. There are, however, coal-based technologies that should be available in time to make significant reductions in coal-plant CO2 emissions. The paper focuses on one plant repowering method that can reduce CO2 per kWh by 25%, advanced circulating pressurized fluidized bed combustion combined cycle (APFBC) technology, based on results from a recent APFBC repowering concept evaluation of the Carolina Power and Light Company's (CP&L) L.V. Sutton steam station. The replacement of the existing 50-year base of power generating units needed to meet proposed Kyoto Protocol CO2 reduction commitments would be a massive undertaking. It is
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Global variance reduction for Monte Carlo reactor physics calculations
Zhang, Q.; Abdel-Khalik, H. S.
2013-07-01
Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have focused primarily on shielding applications, i.e., problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations on the assembly level, where typically one needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that the SUBSPACE method achieves significant speedup over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
...note several instances of automotive manufacturers that fine suppliers for untimely deliveries. For example, "Saturn levies fines of $500 per minute"...transportable combat equipment, to include bulky items such as the 74-ton Mobile Scissors Bridge. It has both forward and aft full-size doors to
Feasibility Study of Variance Reduction in the Logistics Composite Model
2007-03-01
...an expected value of the simulation output, μ{sub Y}, for which Ȳ is an estimator. Also, assume there is another output variable, X, that is correlated with the Y response and has an expected value μ{sub X} that is known. Since X is correlated with the Y variable, it is known as the control variable. Now consider the controlled
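The fragment above is the classical linear control-variate setup. A minimal runnable sketch (our own illustration with an invented toy simulation, not code from the thesis): the controlled estimator Y - b(X - μ{sub X}) keeps the mean of Y while cutting the variance of the mean estimate by roughly the squared correlation between Y and X.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy simulation output: Y is the response of interest; X is a correlated
# auxiliary output whose expectation mu_x is known analytically.
n = 10_000
x = rng.exponential(scale=2.0, size=n)        # E[X] = 2.0 is known
y = 3.0 * x + rng.normal(0.0, 4.0, size=n)    # response correlated with X
mu_x = 2.0

# Optimal coefficient b* = Cov(Y, X) / Var(X); estimating it from the same
# data adds a small bias that vanishes as n grows.
b_star = np.cov(y, x)[0, 1] / np.var(x, ddof=1)

# Controlled estimator: same expectation as Y, variance scaled by (1 - rho^2).
y_cv = y - b_star * (x - mu_x)

print("plain mean     :", y.mean(), "  var of mean:", y.var(ddof=1) / n)
print("controlled mean:", y_cv.mean(), "  var of mean:", y_cv.var(ddof=1) / n)
```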
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
Chapter 10: A Hilbert Space Approach To Variance Reduction
2005-11-16
text are presented in Avellaneda et al. (2001), and in Avellaneda and Gamba (2000). Consider the standard CV setting: (Y1, X1), ..., (Yn, Xn) are...imization objective; this is the subject of Avellaneda and Gamba (2000), and Avellaneda et al. (2001). The important case of f(w) = w{sup 2} is considered next... Avellaneda, M., Buff, R., Friedman, C., Grandchamp, N., Kruk, L., Newman, J., 2001. Weighted Monte Carlo: A new technique for calibrating asset-pricing
Variance Reduction for Quantile Estimates in Simulations Via Nonlinear Controls
1990-04-01
linear control depends upon the correlation between the statistic of interest and the control, which is often low. Since statistics often have a nonlinear...interest and the control reduces the effectiveness of the nonlinear control to that of the linear control. However, the data has to be sectioned to
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Restricted sample variance reduces generalizability.
Lakes, Kimberley D
2013-06-01
One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context.
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
Generalized analysis of molecular variance.
Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J
2007-04-06
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Pillard, Paul; Livet, Veronique; Cabon, Quentin; Bismuth, Camille; Sonet, Juliette; Remy, Denise; Fau, Didier; Carozzo, Claude; Viguier, Eric; Cachon, Thibaut
2016-12-01
OBJECTIVE To evaluate the validity of 2 radiographic methods for measurement of the tibial tuberosity advancement distance required to achieve a reduction in patellar tendon-tibial plateau angle (PTA) to the ideal 90° in dogs by use of the modified Maquet technique (MMT). SAMPLE 24 stifle joints harvested from 12 canine cadavers. PROCEDURES Radiographs of stifle joints placed at 135° in the true lateral position were used to measure the required tibial tuberosity advancement distance with the conventional (A(M)) and correction (A(E)) methods. The MMT was used to successively advance the tibial crest to A(M) and A(E). Postoperative PTA was measured on a mediolateral radiograph for each advancement measurement method. If none of the measurements were close to 90°, the advancement distance was modified until the PTA was equal to 90° within 0.1°, and the true advancement distance (TA) was measured. Results were used to determine the optimal commercially available size of cage implant that would be used in a clinical situation. RESULTS Median A(M) and A(E) were 10.6 mm and 11.5 mm, respectively. Mean PTAs for the conventional and correction methods were 93.4° and 92.3°, respectively, and differed significantly from 90°. Median TA was 13.5 mm. The A(M) and A(E) led to the same cage size recommendations as for TA for only 1 and 4 stifle joints, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Both radiographic methods of measuring the distance required to advance the tibial tuberosity in dogs led to an under-reduction in postoperative PTA when the MMT was used. A new, more accurate radiographic method needs to be developed.
Code of Federal Regulations, 2011 CFR
2011-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2014 CFR
2014-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2012 CFR
2012-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2010 CFR
2010-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Variance Design and Air Pollution Control
ERIC Educational Resources Information Center
Ferrar, Terry A.; Brownstein, Alan B.
1975-01-01
Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can achieve the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
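A compact sketch of the mean-variance optimization just described (our own illustration, not the study's code: synthetic weekly returns stand in for the 20 FBMKLCI component stocks, and the target return is an assumed value). The portfolio variance w'Σw is minimized subject to full investment and the target expected return:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in for 200 weeks of returns on 20 stocks.
returns = rng.normal(0.002, 0.03, size=(200, 20))
mu = returns.mean(axis=0)                # mean weekly return per stock
cov = np.cov(returns, rowvar=False)      # sample covariance matrix
target = 0.002                           # assumed target weekly return

def port_var(w):
    return w @ cov @ w                   # objective: portfolio variance

cons = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},      # fully invested
    {"type": "ineq", "fun": lambda w: w @ mu - target},  # meet target return
]
res = minimize(port_var, np.full(20, 0.05), method="SLSQP",
               bounds=[(0.0, 1.0)] * 20, constraints=cons)

print("weights :", np.round(res.x, 3))
print("return  :", res.x @ mu, "  variance:", port_var(res.x))
```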
This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach accommodates both normal and non-normal data distributions. With this more faithful representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return than the mean-variance approach.
Neural field theory with variance dynamics.
Robinson, P A
2013-06-01
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus into determination of the fixed points and the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.
Variance estimation for stratified propensity score estimators.
Williamson, E J; Morley, R; Lucas, A; Carpenter, J R
2012-07-10
Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
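A minimal sketch of the bootstrap variance estimate the authors recommend (our own illustration; the data-generating model, effect size, and quintile stratification are invented). The essential detail is that the propensity score is re-estimated inside every bootstrap resample, so the uncertainty from estimating the score propagates into the variance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Toy non-randomised study: x drives both treatment assignment and outcome.
x = rng.normal(size=n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * x)))   # treatment indicator
y = 1.0 * t + 2.0 * x + rng.normal(size=n)            # true effect = 1.0

def stratified_effect(x, t, y, n_strata=5):
    """Treatment effect from stratification on an *estimated* propensity score."""
    ps = (LogisticRegression().fit(x.reshape(-1, 1), t)
          .predict_proba(x.reshape(-1, 1))[:, 1])
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
    effects, weights = [], []
    for s in range(n_strata):
        m = strata == s
        if m.sum() == 0 or t[m].min() == t[m].max():   # need both groups present
            continue
        effects.append(y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
        weights.append(m.sum())
    return np.average(effects, weights=weights)

# Nonparametric bootstrap, refitting the propensity model in each resample.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(stratified_effect(x[idx], t[idx], y[idx]))

print("estimate    :", stratified_effect(x, t, y))
print("bootstrap SE:", np.std(boot, ddof=1))
```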
Bierbaum, S; Öller, H-J; Kersten, A; Klemenčič, A Krivograd
2014-01-01
Ozone (O(3)) has been used successfully in advanced wastewater treatment in paper mills, other sectors and municipalities. To solve the water problems of regions lacking fresh water, wastewater treated by advanced oxidation processes (AOPs) can substitute for fresh water in highly water-consuming industries. The results of this study have shown that, when paper mill wastewater is reused, paper strength properties are not impaired and whiteness is only slightly impaired. Furthermore, organic trace compounds are becoming an issue in the German paper industry. The results of this study have also shown that AOPs are capable of improving wastewater quality by reducing organic load, colour and organic trace compounds.
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) neither explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
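The effect described here is easy to reproduce with a toy Monte Carlo (our own sketch; all values are hypothetical and not from ASTM D 4646 or the EPA guidance). When almost nothing sorbs, K{sub d} is computed from the difference of two nearly equal noisy concentrations and its variance explodes; near 50% sorbed the relative error is smallest:

```python
import numpy as np

rng = np.random.default_rng(7)

kd_true = 10.0       # mL/g, assumed true partition coefficient
c0 = 100.0           # initial solution concentration (arbitrary units)
rel_noise = 0.02     # assumed 2% relative analytical error on concentrations

for ratio in [0.001, 0.01, 0.1, 1.0]:    # sorbent mass : solution volume, g/mL
    f_sorbed = kd_true * ratio / (1.0 + kd_true * ratio)  # fraction sorbed
    c_true = c0 * (1.0 - f_sorbed)
    # Measured concentrations carry analytical noise on both C0 and C.
    c0_meas = c0 * (1.0 + rel_noise * rng.normal(size=100_000))
    c_meas = c_true * (1.0 + rel_noise * rng.normal(size=100_000))
    kd_hat = (c0_meas - c_meas) / (c_meas * ratio)        # batch Kd estimate
    print(f"fraction sorbed {f_sorbed:4.2f}   "
          f"relative SD of Kd {np.std(kd_hat) / kd_true:7.3f}")
```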
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Phonocardiographic diagnosis of aortic ball variance.
Hylen, J C; Kloster, F E; Herr, R H; Hull, P Q; Ames, A W; Starr, A; Griswold, H E
1968-07-01
Fatty infiltration causing changes in the silastic poppet of the Model 1000 series Starr-Edwards aortic valve prostheses (ball variance) has been detected with increasing frequency and can result in sudden death. Phonocardiograms were recorded on 12 patients with ball variance confirmed by operation and of 31 controls. Ten of the 12 patients with ball variance were distinguished from the controls by an aortic opening sound (AO) less than half as intense as the aortic closure sound (AC) at the second right intercostal space (AO/AC ratio less than 0.5). Both AO and AC were decreased in two patients with ball variance, with the loss of the characteristic high frequency and amplitude of these sounds. The only patient having a diminished AO/AC ratio (0.42) without ball variance at reoperation had a clot extending over the aortic valve struts. The phonocardiographic findings have been the most reliable objective evidence of ball variance in patients with Starr-Edwards aortic prosthesis of the Model 1000 series.
Sambandam, Sankar; Balakrishnan, Kalpana; Ghosh, Santu; Sadasivam, Arulselvan; Madhav, Satish; Ramasamy, Rengaraj; Samanta, Maitreya; Mukhopadhyay, Krishnendu; Rehman, Hafeez; Ramanathan, Veerabhadran
2015-03-01
Household air pollution from use of solid fuels is a major contributor to the national burden of disease in India. Currently available models of advanced combustion biomass cook-stoves (ACS) report significantly higher efficiencies and lower emissions in the laboratory when compared to traditional cook-stoves, but relatively little is known about household-level exposure reductions achieved under routine conditions of use. We report results from initial field assessments of six commercial ACS models from the states of Tamil Nadu and Uttar Pradesh in India. We monitored 72 households (divided into six arms, each receiving one ACS model) for 24-h kitchen area concentrations of PM2.5 and CO before and (1-6 months) after installation of the new stove, together with detailed information on fixed and time-varying household characteristics. Detailed surveys collected information on user perceptions regarding acceptability for routine use. While the median percent reductions in 24-h PM2.5 and CO concentrations ranged from 2-71% and 10-66%, respectively, concentrations consistently exceeded WHO air quality guideline values across all models, raising questions regarding the health relevance of such reductions. Most models were perceived to be sub-optimally designed for routine use, often resulting in inappropriate and inadequate levels of use. Household concentration reductions also run the risk of being compromised by high ambient backgrounds from community-level solid-fuel use and contributions from surrounding fossil fuel sources. Results indicate that achieving health-relevant exposure reductions in solid-fuel using households will require integration of emissions reductions with ease of use and adoption at community scale in cook-stove technologies. Immediate efforts are also needed to accelerate the progress towards cleaner fuels.
NASA Astrophysics Data System (ADS)
Singh, R.; Mahajan, V.
2014-07-01
In the present work, surface hardness investigations have been made on acrylonitrile butadiene styrene (ABS) pattern-based investment castings after advancements in shell moulding for replication of biomedical implants. For the present study, a hip joint, made of ABS material, was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, the mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings with the addition of nylon fibre (1-2 cm in length, 1.5D). This study outlines the surface hardness mechanism for the cast component prepared from the ABS master pattern after the advancement in shell moulding. The results of the study highlight that, during shell production, fibre-modified shells have a much reduced drain time. The results are further supported by cooling rate and microstructure analysis of the casting.
Giebner, Sabrina; Ostermann, Sina; Straskraba, Susanne; Oetken, Matthias; Oehlmann, Jörg; Wagner, Martin
2016-09-06
Conventional wastewater treatment plants (WWTPs) have a limited capacity to eliminate micropollutants. One option to improve this is tertiary treatment. Accordingly, the WWTP Eriskirch at the German river Schussen has been upgraded with different combinations of ozonation, sand, and granulated activated carbon filtration. In this study, the removal of endocrine and genotoxic effects in vitro and reproductive toxicity in vivo was assessed in a two-year long-term monitoring program. All experiments were performed with aqueous and solid-phase extracted water samples. Untreated wastewater affected several endocrine endpoints in reporter gene assays. The conventional treatment removed the estrogenic and androgenic activity by 77 and 95%, respectively. Nevertheless, high anti-estrogenic activities and reproductive toxicity persisted. All advanced treatment technologies further reduced the estrogenic activities by an additional 69-86% compared to conventional treatment, resulting in a removal of up to 97%. In the Ames assay, we detected ozone-induced mutagenicity, which was removed by subsequent filtration. This demonstrates that a post-treatment after ozonation is needed to minimize toxic oxidative transformation products. In the reproduction test with the mudsnail Potamopyrgus antipodarum, a decreased number of embryos was observed for all wastewater samples. This indicates that reproductive toxicants were eliminated by neither the conventional nor the advanced treatment. Furthermore, aqueous samples showed higher anti-estrogenic and reproductive toxicity than extracted samples, indicating that the causative compounds are not extractable or were lost during extraction. This underlines the importance of the adequate handling of wastewater samples. Taken together, this study demonstrates that combinations of multiple advanced technologies reduce endocrine effects in vitro. However, they did not remove in vitro anti-estrogenicity and in vivo reproductive toxicity. This
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
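As a crude stand-in for the modality classification described (this is not the authors' metric; the KDE peak-counting rule and the toy ensembles are our own assumptions), one can count local maxima of a smoothed density of the ensemble values at a given location:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def count_modes(samples, grid_size=256):
    """Count local maxima of a kernel density estimate of one location's
    ensemble values; more than one suggests divergent member trends."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    dens = kde(grid)
    peaks = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    return int(peaks.sum())

unimodal = rng.normal(0.0, 1.0, 64)                    # one central tendency
bimodal = np.concatenate([rng.normal(-3.0, 1.0, 32),
                          rng.normal(+3.0, 1.0, 32)])  # two divergent trends

print("unimodal location -> modes:", count_modes(unimodal))
print("bimodal  location -> modes:", count_modes(bimodal))
```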
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ{sub STAN}{sup 2}, while in the signal interval, the variance of the sequence was σ{sub SIG}{sup 2} (with σ{sub SIG}{sup 2} > σ{sub STAN}{sup 2}). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ{sub STAN}{sup 2}. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ{sub SIG}{sup 2} - σ{sub STAN}{sup 2}) to σ{sub STAN}{sup 2} yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
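The ideal observer in this task simply picks the interval whose five-sample sequence has the larger sample variance. A small simulation (our own sketch; the Weber fraction used is an arbitrary assumption) reproduces the scale invariance behind the Weber's-Law result, and also shows why the roving mean frequency is harmless to this observer, since the sample variance ignores the mean:

```python
import numpy as np

rng = np.random.default_rng(11)

def pc_ideal(var_stan, var_sig, n_pulses=5, trials=100_000):
    """Proportion correct for an observer choosing the interval whose
    n-pulse sequence (log-frequency values) has the larger sample variance."""
    stan = rng.normal(0.0, np.sqrt(var_stan), size=(trials, n_pulses))
    sig = rng.normal(0.0, np.sqrt(var_sig), size=(trials, n_pulses))
    return (sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)).mean()

# Holding (var_sig - var_stan) / var_stan fixed gives near-constant performance.
for var_stan in [0.01, 0.04, 0.16]:
    var_sig = var_stan * 3.0          # Weber fraction of 2, an assumed value
    print(f"var_stan={var_stan:.2f}  P(correct)={pc_ideal(var_stan, var_sig):.3f}")
```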
Variance Decomposition Using an IRT Measurement Model
Glas, Cees A. W.; Boomsma, Dorret I.
2007-01-01
Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%.
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first-order Taylor expansion. In this study, we derive three variance estimators for each of the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study. The simulations show that the variance estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulations show that the performance of this bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
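For orientation, the first-order (delta method) construction that the paper improves upon can be written out for the simpler one-parameter Jukes-Cantor (JC69) model; this is an illustrative sketch only, not one of the paper's F81/F84/HKY85/TN93 derivations:

```python
import numpy as np

def jc69_distance(p):
    """JC69 substitution number from the proportion p of differing sites."""
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

def var_first_order(p, n):
    """First-order Taylor (delta method) variance: Var(d) ~ d'(p)^2 p(1-p)/n."""
    dprime = 1.0 / (1.0 - 4.0 * p / 3.0)
    return dprime**2 * p * (1.0 - p) / n

# Monte Carlo check of the analytic first-order estimate.
rng = np.random.default_rng(5)
n_sites, p_true = 500, 0.2
p_hat = rng.binomial(n_sites, p_true, size=200_000) / n_sites
d_hat = jc69_distance(p_hat)

print("first-order variance:", var_first_order(p_true, n_sites))
print("Monte Carlo variance:", d_hat.var(ddof=1))
```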
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
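A schematic of the proposed cutoff logic (all names, units, and the update rule below are illustrative stand-ins, not the Jayenne implementation):

```python
from dataclasses import dataclass

@dataclass
class Cell:
    temperature: float       # material temperature (placeholder units)
    material_energy: float   # energy in the material state
    source_rate: float       # material source strength

T_CUTOFF = 0.05  # user-specified temperature cutoff; value is a placeholder

def emit_imc_source_particles(cell: Cell, dt: float) -> None:
    """Stand-in for the usual IMC source sampling (not implemented here)."""
    print(f"sampling IMC source particles for energy {cell.source_rate * dt:.3g}")

def apply_material_source(cell: Cell, dt: float) -> None:
    if cell.temperature < T_CUTOFF:
        # Small-source branch: bypass the IMC implicit split and update the
        # material state deterministically, so the tiny source contributes
        # no Monte Carlo variance at all.
        cell.material_energy += cell.source_rate * dt
    else:
        # Normal branch: let the Monte Carlo machinery split part of the
        # source into the radiation field, per the Fleck-Cummings scheme.
        emit_imc_source_particles(cell, dt)

apply_material_source(Cell(0.01, 0.0, 1e-4), dt=0.1)  # deterministic branch
```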
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Evaluation of climate modeling factors impacting the variance of streamflow
NASA Astrophysics Data System (ADS)
Al Aamery, N.; Fox, J. F.; Snyder, M.
2016-11-01
The present contribution quantifies the relative importance of climate modeling factors and chosen response variables in controlling the variance of streamflow forecasted with global climate model (GCM) projections, which, to our knowledge, has not been attempted in the previous literature. We designed an experiment that varied climate modeling factors, including GCM type, project phase, emission scenario, downscaling method, and bias correction. The streamflow response variable was also varied and included forecasted streamflow and the difference between forecast and hindcast streamflow predictions. GCM results and the Soil and Water Assessment Tool (SWAT) were used to predict streamflow for a wet, temperate watershed in central Kentucky, USA. After calibrating the streamflow model, 112 climate realizations were simulated within the streamflow model and then analyzed on a monthly basis using analysis of variance. Analysis of variance results indicate that the difference between forecast and hindcast streamflow predictions is a function of GCM type, climate model project phase, and downscaling approach. The prediction of forecasted streamflow is a function of GCM type, project phase, downscaling method, emission scenario, and bias correction method. The results indicate the relative importance of the five climate modeling factors when designing streamflow prediction ensembles and quantify the reduction in uncertainty associated with coupling the climate results with the hydrologic model when subtracting the hindcast simulations. Thereafter, analysis of streamflow prediction ensembles with different numbers of realizations shows that use of all available realizations is unneeded for the study system, so long as the ensemble design is well balanced. After accounting for the factors controlling streamflow variance, results show that predicted average monthly changes in streamflow tend to follow precipitation changes and result in a net increase in the average annual precipitation and
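A toy version of this factorial analysis (our own sketch: the factor levels and effect sizes are invented, and only two of the five factors are shown) illustrates the variance partition over a balanced 112-run design:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(9)

# Balanced synthetic design: 4 GCMs x 2 downscaling methods x 14 replicates.
rows = []
for g_i, gcm in enumerate(["GCM_A", "GCM_B", "GCM_C", "GCM_D"]):
    for d_i, ds in enumerate(["statistical", "dynamical"]):
        for _ in range(14):
            flow = 100.0 + 8.0 * g_i + 3.0 * d_i + rng.normal(0.0, 5.0)
            rows.append({"gcm": gcm, "downscaling": ds, "flow": flow})
df = pd.DataFrame(rows)

# Partition monthly-streamflow variance by modeling factor.
model = ols("flow ~ C(gcm) + C(downscaling)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```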
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
NASA Technical Reports Server (NTRS)
Low, John K. C.; Schweiger, Paul S.; Premo, John W.; Barber, Thomas J.; Saiyed, Naseem (Technical Monitor)
2000-01-01
NASA's model-scale nozzle noise tests show that it is possible to achieve a 3 EPNdB jet noise reduction with inward-facing chevrons and flipper-tabs installed on the primary nozzle, together with fan nozzle chevrons. These chevrons and tabs are simple devices and are easy to incorporate into existing short-duct separate-flow nonmixed nozzle exhaust systems. However, these devices are expected to cause some small amount of thrust loss relative to the axisymmetric baseline nozzle system. Thus, it is important to have these devices further tested in a calibrated nozzle performance test facility to quantify their thrust performance. The choice of chevrons or tabs for jet noise suppression would most likely be based on the results of thrust loss performance tests to be conducted by Aero System Engineering (ASE) Inc. It is anticipated that the most promising concepts identified from this program will be validated in full scale engine tests at both Pratt & Whitney and Allied-Signal, under funding from NASA's Engine Validation of Noise Reduction Concepts (EVNRC) programs. This will bring the technology readiness level to the point where the jet noise suppression concepts could be incorporated with high confidence into either new or existing turbofan engines having short-duct, separate-flow nacelles.
Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; dos Santos, Luciana Urbano
2014-01-01
This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJcm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone.
NASA Technical Reports Server (NTRS)
Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.
1986-01-01
Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
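For context, the statistics whose confidence measures the algorithm approximates can be computed from phase data via the standard definitions (a minimal sketch of MVAR and TVAR themselves, not of the degrees-of-freedom approximation in the paper):

```python
import numpy as np

def mod_allan_var(x, tau0, m):
    """Modified Allan variance at tau = m * tau0 from phase data x,
    using the standard second-difference definition."""
    x = np.asarray(x, dtype=float)
    tau = m * tau0
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]        # second differences
    inner = np.convolve(d2, np.ones(m), mode="valid")  # length N - 3m + 1
    return np.sum(inner**2) / (2.0 * m**2 * tau**2 * inner.size)

def time_var(x, tau0, m):
    """Time variance: TVAR(tau) = tau^2 / 3 * MVAR(tau)."""
    return (m * tau0)**2 / 3.0 * mod_allan_var(x, tau0, m)

# White phase noise: MDEV should fall roughly as tau^(-3/2).
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1e-9, size=4096)                   # phase, in seconds
for m in [1, 2, 4, 8]:
    print(f"m={m:2d}  MDEV={np.sqrt(mod_allan_var(x, 1.0, m)):.3e}"
          f"  TDEV={np.sqrt(time_var(x, 1.0, m)):.3e}")
```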
O'Connor, Patrick; Rugani, Kelsey; West, Anna
2016-03-01
On behalf of the U.S. Department of Energy (DOE) Wind and Water Power Technology Office (WWPTO), Oak Ridge National Laboratory (ORNL) hosted a day-and-a-half-long workshop on November 5 and 6, 2015, in the Washington, D.C. metro area to discuss cost reduction opportunities in the development of hydropower projects. The workshop had a further targeted focus on the costs of small, low-head facilities at both non-powered dams (NPDs) and along undeveloped stream reaches (also known as New Stream-Reach Development or “NSD”). Workshop participants included a cross-section of seasoned experts, including project owners and developers, engineering and construction experts, conventional and next-generation equipment manufacturers, and others, to identify the most promising ways to reduce costs and achieve improvements for hydropower projects.
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Dockets Management, except for information regarded as confidential under section 537(e) of the act. (d... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. (1) The application for variance shall include the following information: (i) A description of the product and...
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
Understanding gender variance in children and adolescents.
Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A
2014-06-01
Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress arising from the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document,...
Code of Federal Regulations, 2010 CFR
2010-04-01
... the study was conducted in compliance with the good laboratory practice regulations set forth in part... application for variance shall include the following information: (i) A description of the product and its... equipment, the proposed location of each unit. (viii) Such other information required by regulation or...
Parameterization of Incident and Infragravity Swash Variance
NASA Astrophysics Data System (ADS)
Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.
2002-12-01
By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number ξ0 < 0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina (0.3 < ξ0 < 4.0), where significant differences in the relationship between swash
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from...
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from...
Dijk, Derk-Jan; Duffy, Jeanne F.; Silva, Edward J.; Shanahan, Theresa L.; Boivin, Diane B.; Czeisler, Charles A.
2012-01-01
Background The phase and amplitude of rhythms in physiology and behavior are generated by circadian oscillators and entrained to the 24-h day by exposure to the light-dark cycle and feedback from the sleep-wake cycle. The extent to which the phase and amplitude of multiple rhythms are similarly affected during altered timing of light exposure and the sleep-wake cycle has not been fully characterized. Methodology/Principal Findings We assessed the phase and amplitude of the rhythms of melatonin, core body temperature, cortisol, alertness, performance and sleep after a perturbation of entrainment by a gradual advance of the sleep-wake schedule (10 h in 5 days) and associated light-dark cycle in 14 healthy men. The light-dark cycle consisted either of moderate intensity ‘room’ light (∼90–150 lux) or moderate light supplemented with bright light (∼10,000 lux) for 5 to 8 hours following sleep. After the advance of the sleep-wake schedule in moderate light, no significant advance of the melatonin rhythm was observed, whereas after bright light supplementation the phase advance was 8.1 h (SEM 0.7 h). Individual differences in phase shifts correlated across variables. The amplitude of the melatonin rhythm assessed under constant conditions was reduced after moderate light by 54% (range 17–94%) and after bright light by 52% (range 12–84%), as compared to the amplitude at baseline in the presence of a sleep-wake cycle. Individual differences in amplitude reduction of the melatonin rhythm correlated with the amplitude of body temperature, cortisol and alertness. Conclusions/Significance Alterations in the timing of the sleep-wake cycle and associated bright or moderate light exposure can lead to changes in phase and reduction of circadian amplitude which are consistent across multiple variables but differ between individuals. These data have implications for our understanding of circadian organization and the negative health outcomes associated with shift-work and jet lag.
NASA Astrophysics Data System (ADS)
Hasegawa, Ken R.
2000-12-01
MSMP and BAMM were commissioned by the Air Force Space Division (AFSD) in the late seventies to generate data in support of the Advanced Warning System (AWS), a development activity to replace the space-based surveillance satellites of the Defense Support Program (DSP). These programs were carried out by the Air Force Geophysics Laboratory with planning and mentoring by Irving Spiro of The Aerospace Corporation, acting on behalf of the program managers, 1st Lt. Todd Frantz, 1st Lt. Gordon Frantom, and 1st Lt. Ken Hasegawa of the technology program office at AFSD. The motivation of MSMP was the need for characterizing the exhaust plumes of the thrusters aboard post-boost vehicles, a primary target for the infrared sensors of the proposed AWS system. To that end, the experiments consisted of a series of Aries rocket launches from White Sands Missile Range in which dual payloads were carried aloft and separately deployed at altitudes above 100 km. One module contained an ensemble of sensors spanning the spectrum from the vacuum ultraviolet to the long wave infrared, all slaved to an rf tracker locked onto a beacon on the target module. The target was a small pressure-fed liquid-propellant rocket engine, a modified Atlas vernier, programmed for a series of maneuvers in the vicinity of the instrument module. As part of this program, diagnostic measurements of the target engine exhaust were made at Rocketdyne, and shock tube experiments on excitation processes were carried out by staff members of Calspan.
Herner, Jorn Dinh; Hu, Shaohua; Robertson, William H; Huai, Tao; Chang, M-C Oliver; Rieger, Paul; Ayala, Alberto
2011-03-15
Four heavy-duty and medium-duty diesel vehicles were tested in six different aftertreatment configurations using a chassis dynamometer to characterize the occurrence of nucleation (the conversion of exhaust gases to particles upon dilution). The aftertreatment included four different diesel particulate filters (DPFs) and two selective catalytic reduction (SCR) devices. All DPFs reduced the emissions of solid particles by several orders of magnitude, but in certain cases the occurrence of a volatile nucleation mode could increase total particle number emissions. The occurrence of a nucleation mode could be predicted based on the level of catalyst in the aftertreatment, the prevailing temperature in the aftertreatment, and the age of the aftertreatment. The particles measured during nucleation had a high fraction of sulfate, up to 62% of reconstructed mass. Additionally, the catalyst reduced the toxicity measured in chemical and cellular assays, suggesting a pathway for an inverse correlation between particle number and toxicity. The results have implications for exposure to, and the toxicity of, diesel PM.
Not Available
1991-01-01
ABB CE's Low NOx Bulk Furnace Staging (LNBFS) System and Low NOx Concentric Firing System (LNCFS) are demonstrated in stepwise fashion. These systems incorporate the concept of advanced overfire air (AOFA), clustered coal nozzles, and offset air. A complete description of the installed technologies is provided in the following section. The primary objective of the Plant Lansing Smith demonstration is to determine the long-term effects of commercially available tangentially-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology are also being performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project.
Not Available
1992-01-01
The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NOx emission levels to be approximately 0.65 lb/MBtu. Flyash LOI values for the LNB configuration are approximately 8 percent at full load. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. Abbreviated diagnostic tests for the LNB+AOFA configuration indicate that at 500 MWe, NOx emissions are approximately 0.55 lb/MBtu with corresponding flyash LOI values of approximately 11 percent. For comparison, the long-term full-load, baseline NOx emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB+AOFA configuration will be performed when the stack particulate emissions issue is resolved. Testing of a process optimization package on Plant Hammond Unit 4 was performed during this quarter. The software was configured to minimize NOx emissions using total combustion air flow and advanced overfire air distribution as the controlled parameters. Preliminary results from this testing indicate that this package shows promise in reducing NOx emissions while maintaining or improving other boiler performance parameters.
NiCo2O4/N-doped graphene as an advanced electrocatalyst for oxygen reduction reaction
NASA Astrophysics Data System (ADS)
Zhang, Hui; Li, Huiyong; Wang, Haiyan; He, Kejian; Wang, Shuangyin; Tang, Yougen; Chen, Jiajie
2015-04-01
Developing low-cost catalysts for the high-performance oxygen reduction reaction (ORR) is highly desirable. Herein, a NiCo2O4/N-doped reduced graphene oxide (NiCo2O4/N-rGO) hybrid is proposed as a high-performance ORR catalyst for the first time. The well-formed NiCo2O4/N-rGO hybrid is studied by cyclic voltammetry (CV) and linear-sweep voltammetry (LSV) on a rotating ring-disk electrode (RDE), in comparison with N-rGO-free NiCo2O4 and bare N-rGO. Due to a synergistic effect, the NiCo2O4/N-rGO hybrid exhibits significantly improved catalytic performance, with an onset potential of -0.12 V, and mainly favors a direct four-electron pathway in the ORR process, close to the behavior of commercial carbon-supported Pt. Also, the benefits of N incorporation are investigated by comparing NiCo2O4/N-rGO with NiCo2O4/rGO: higher cathodic currents, a much more positive half-wave potential, and larger electron transfer numbers are observed for the N-doped hybrid, which should be ascribed to new, highly efficient active sites created by N incorporation into graphene. The NiCo2O4/N-rGO hybrid could be used as a promising catalyst for high-power metal/air batteries.
Morishima, Chihiro; Shiffman, Mitchell L.; Dienstag, Jules L.; Lindsay, Karen L; Szabo, Gyongyi; Everson, Gregory T.; Lok, Anna S.; Di Bisceglie, Adrian M.; Ghany, Marc G.; Naishadham, Deepa; Morgan, Timothy R.; Wright, Elizabeth C.
2013-01-01
Objective During the Hepatitis C Antiviral Long-term Treatment against Cirrhosis Trial, 3.5 years of maintenance peginterferon-alfa-2a therapy did not affect liver fibrosis progression or clinical outcomes among 1,050 prior interferon nonresponders with advanced fibrosis or cirrhosis. We investigated whether reduced hepatic inflammation was associated with clinical benefit in 834 patients with a baseline and follow-up biopsy 1.5 years after randomization to peginterferon or observation. Methods Relationships between change in hepatic inflammation (Ishak HAI) and serum ALT, fibrosis progression and clinical outcomes after randomization, and HCV RNA decline before and after randomization were evaluated. Histologic change was defined as a ≥2-point difference in HAI or Ishak fibrosis score between biopsies. Results Among 657 patients who received full-dose peginterferon/ribavirin “lead-in” therapy before randomization, year-1.5 HAI improvement was associated with lead-in HCV RNA suppression in both randomized treated (P <0.0001) and control (P = 0.0001) groups, even in the presence of recurrent viremia. This relationship persisted at year 3.5 in both treated (P = 0.001) and control (P = 0.01) groups. Among 834 patients followed for a median of 6 years, fewer clinical outcomes occurred in patients with improved HAI at year 1.5 compared to those without such improvement in both treated (P = 0.03) and control (P = 0.05) groups. Among patients with Ishak 3–4 fibrosis at baseline, those with improved HAI at year 1.5 had less fibrosis progression at year 1.5 in both treated (P = 0.0003) and control (P = 0.02) groups. Conclusion Reduced hepatic inflammation (measured 1.5 and 3.5 years after randomization) was associated with profound virological suppression during lead-in treatment with full-dose peginterferon/ribavirin and with decreased fibrosis progression and clinical outcomes, independent of randomized treatment. PMID:22688849
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
Analysis of Variance of Multiply Imputed Data.
van Ginkel, Joost R; Kroonenberg, Pieter M
2014-01-01
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance across multiply imputed data sets. It involves both reformulating the ANOVA model as a regression model using effect coding of the predictors and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
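A sketch of the multi-parameter combination this implies, assuming each imputed analysis yields a vector of effect-coded ANOVA coefficients with its covariance matrix (a D1-style pooling rule; the function name is ours, and details such as the denominator degrees of freedom are omitted):

```python
import numpy as np

def pool_D1(estimates, covariances):
    """Pool m imputed analyses. estimates: (m, k) effect-coded ANOVA
    coefficients; covariances: (m, k, k) within-imputation covariances.
    Returns D1 (referred to an F distribution with k numerator df),
    k, and the relative increase in variance r."""
    Q = np.asarray(estimates, dtype=float)
    U = np.asarray(covariances, dtype=float)
    m, k = Q.shape
    qbar = Q.mean(axis=0)                        # pooled coefficients
    Ubar_inv = np.linalg.inv(U.mean(axis=0))     # avg within-imputation covariance
    B = np.cov(Q, rowvar=False, ddof=1)          # between-imputation covariance
    r = (1.0 + 1.0 / m) * np.trace(B @ Ubar_inv) / k
    D1 = qbar @ Ubar_inv @ qbar / (k * (1.0 + r))
    return D1, k, r
```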
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
The wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than today. One of the key factors for increasing market value is to provide better measurements and thereby more information to support decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. Spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis; it is usable in other two-dimensional random signal and texture analysis tasks.
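A hedged sketch of the core idea, projection variance as a function of angle via the Radon transform, using skimage's radon as a stand-in for the paper's local implementation (the function name is ours):

```python
import numpy as np
from skimage.transform import radon

def ring_direction(image, angles=None):
    """Estimate the dominant annual-ring direction of a grayscale log-end
    image: the variance of a Radon projection is largest when the
    integration lines run along the ring stripes."""
    if angles is None:
        angles = np.arange(0.0, 180.0)
    sinogram = radon(image, theta=angles, circle=False)
    proj_var = sinogram.var(axis=0)   # one variance per projection angle
    return angles[int(np.argmax(proj_var))], proj_var
```

Applying a 1-D spectrum analysis to the maximum-variance projection, rather than to the raw image, then gives the ring-width estimate described above.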
Variance and skewness in the FIRST survey
NASA Astrophysics Data System (ADS)
Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.
1998-10-01
We investigate the large-scale clustering of radio sources in the FIRST 1.4-GHz survey by analysing the distribution function (counts in cells). We select a reliable sample from the FIRST catalogue, paying particular attention to the problem of how to define single radio sources from the multiple components listed. We also consider the incompleteness of the catalogue. We estimate the angular two-point correlation function w(θ), the variance Ψ2, and skewness Ψ3 of the distribution for the various subsamples chosen on different criteria. Both w(θ) and Ψ2 show power-law behaviour with an amplitude corresponding to a spatial correlation length of r0 ~ 10 h⁻¹ Mpc. We detect significant skewness in the distribution, the first such detection in radio surveys. This skewness is found to be related to the variance through Ψ3 = S3(Ψ2)^α, with α = 1.9 ± 0.1, consistent with the non-linear gravitational growth of perturbations from primordial Gaussian initial conditions. We show that the amplitude of variance and the skewness are consistent with realistic models of galaxy clustering.
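As an illustration of the quantities estimated here, a minimal numpy sketch of the standard discreteness-corrected counts-in-cells moments (assuming Poisson sampling of an underlying density field; the function name is ours, and this is not the authors' pipeline):

```python
import numpy as np

def counts_in_cells_moments(counts):
    """Shot-noise-corrected variance (Psi2) and skewness (Psi3) of counts
    in cells, using <(N-nbar)^2> = nbar + nbar^2*Psi2 and
    <(N-nbar)^3> = nbar + 3*nbar^2*Psi2 + nbar^3*Psi3."""
    N = np.asarray(counts, dtype=float)
    nbar = N.mean()
    m2 = np.mean((N - nbar)**2)
    m3 = np.mean((N - nbar)**3)
    psi2 = (m2 - nbar) / nbar**2
    psi3 = (m3 - 3.0 * psi2 * nbar**2 - nbar) / nbar**3
    return psi2, psi3
```

The power-law scaling of Ψ2 with cell size, and the relation Ψ3 = S3(Ψ2)^α, can then be fit across a range of cell sizes.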
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to recommend when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure; however, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed, and guidelines are given for using the methods.
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
1995-09-01
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters. Results are described.
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
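The elevation-variance map at the core of this representation can be sketched as per-cell running statistics (a minimal illustration using Welford's update; not the Gamma-SLAM code, and the class name is ours):

```python
import numpy as np

class VarianceGrid:
    """Per-cell running mean and variance of terrain elevation samples,
    one such grid per particle in an RBPF."""
    def __init__(self, shape):
        self.n = np.zeros(shape)       # samples per cell
        self.mean = np.zeros(shape)    # running elevation mean
        self.m2 = np.zeros(shape)      # sum of squared deviations

    def update(self, i, j, z):
        """Fold elevation measurement z into cell (i, j) (Welford's update)."""
        self.n[i, j] += 1
        delta = z - self.mean[i, j]
        self.mean[i, j] += delta / self.n[i, j]
        self.m2[i, j] += delta * (z - self.mean[i, j])

    def variance(self):
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(self.n > 1, self.m2 / (self.n - 1), np.nan)
```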
The defect variance of random spherical harmonics
NASA Astrophysics Data System (ADS)
Marinucci, Domenico; Wigman, Igor
2011-09-01
The defect of a function $f: M \rightarrow \mathbb{R}$ is defined as the difference between the measure of the positive and negative regions. In this paper, we begin the analysis of the distribution of the defect of random Gaussian spherical harmonics. By an easy argument, the defect is non-trivial only for even degree, and the expected value always vanishes. Our principal result evaluates the defect variance, asymptotically in the high-frequency limit. Like other geometric functionals of random eigenfunctions, the defect may be used as a tool to probe the statistical properties of spherical random fields, a topic of great interest for modern cosmological data analysis.
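Written out, the defect defined in this abstract is (notation ours):

```latex
D(f) \;=\; \operatorname{meas}\{x \in M : f(x) > 0\} \;-\; \operatorname{meas}\{x \in M : f(x) < 0\}
\;=\; \int_{M} \operatorname{sgn}\bigl(f(x)\bigr)\,dx ,
```

so that D(f) = 0 whenever the positive and negative regions balance, which is why only its fluctuations (the variance) carry information.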
Thomas, Reju George; Moon, Myeong Ju; Kim, Jo Heon; Lee, Jae Hyuk; Jeong, Yong Yeon
2015-01-01
Advanced hepatic fibrosis therapy using drug-delivering nanoparticles is a relatively unexplored area. Angiotensin type 1 (AT1) receptor blockers such as losartan can be delivered to hepatic stellate cells (HSC), blocking their activation and thereby reducing fibrosis progression in the liver. In our study, we analyzed the possibility of utilizing drug-loaded vehicles such as hyaluronic acid (HA) micelles carrying losartan to attenuate HSC activation. Losartan, which exhibits inherent lipophilicity, was loaded into the hydrophobic core of HA micelles with a 19.5% drug loading efficiency. An advanced liver fibrosis model was developed using C3H/HeN mice subjected to 20 weeks of prolonged TAA/ethanol weight-adapted treatment. The cytocompatibility and cell uptake profile of losartan-HA micelles were studied in murine fibroblast cells (NIH3T3), human hepatic stellate cells (hHSC) and FL83B cells (hepatocyte cell line). The ability of these nanoparticles to attenuate HSC activation was studied in activated HSC cells based on alpha smooth muscle actin (α-sma) expression. Mice treated with oral losartan or losartan-HA micelles were analyzed for serum enzyme levels (ALT/AST, CK and LDH) and collagen deposition (hydroxyproline levels) in the liver. The accumulation of HA micelles was observed in fibrotic livers, which suggests increased delivery of losartan compared to normal livers and specific uptake by HSC. Active reduction of α-sma was observed in hHSC and the liver sections of losartan-HA micelle-treated mice. The serum enzyme levels and collagen deposition of losartan-HA micelle-treated mice was reduced significantly compared to the oral losartan group. Losartan-HA micelles demonstrated significant attenuation of hepatic fibrosis via an HSC-targeting mechanism in our in vitro and in vivo studies. These nanoparticles can be considered as an alternative therapy for liver fibrosis.
The influence of local spring temperature variance on temperature sensitivity of spring phenology.
Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe
2014-05-01
The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade, and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assembled a ground-based long-term (20-50 years) spring phenology database (the PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level. This suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that frost risk is higher where local spring temperature variance is larger, and plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study shows that local spring temperature variance is an understudied factor in the study of phenological sensitivity and highlights the necessity of incorporating it to improve the predictability of plant responses to anthropogenic climate change in future studies.
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form, in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve relative to an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus a more stable geometry than a straight or nonmeandering alignment.
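The sine-generated curve described in this abstract can be written out explicitly (notation ours):

```latex
\theta(s) \;=\; \omega \sin\!\left(\frac{2\pi s}{M}\right),
\qquad \frac{M}{\bar{R}} \;\approx\; 4.7 ,
```

where s is distance along the channel, M the along-channel length of one meander, ω the maximum angle the channel makes with the mean down-valley direction, and R̄ the average radius of curvature in the bend.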
Multivariate Granger causality and generalized variance
NASA Astrophysics Data System (ADS)
Barrett, Adam B.; Barnett, Lionel; Seth, Anil K.
2010-04-01
Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or “ensembles” of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke’s seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define “partial” Granger causality in the multivariate context and we also motivate reformulations of “causal density” and “Granger autonomy.” Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
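A sketch of the generalized-variance form of Granger causality between multivariate sets, assuming least-squares VAR fits (names ours; not the authors' code):

```python
import numpy as np

def gc_generalized_variance(Y, X, p=2):
    """Granger causality X -> Y via generalized variances:
    ln det(Sigma_reduced) - ln det(Sigma_full), where the reduced VAR
    omits the lags of X. Y: (T, dy); X: (T, dx); p: model order."""
    T = Y.shape[0]

    def residual_cov(target, predictors):
        # regress target[t] on a constant and p lags of each predictor block
        rows = [np.concatenate([blk[t - k] for blk in predictors
                                for k in range(1, p + 1)])
                for t in range(p, T)]
        Z = np.column_stack([np.ones(T - p), np.asarray(rows)])
        B, *_ = np.linalg.lstsq(Z, target[p:], rcond=None)
        E = target[p:] - Z @ B
        return E.T @ E / (T - p)

    _, logdet_full = np.linalg.slogdet(residual_cov(Y, [Y, X]))
    _, logdet_red = np.linalg.slogdet(residual_cov(Y, [Y]))
    return logdet_red - logdet_full
```

The measure is nonnegative for nested least-squares fits and reduces to the familiar log variance ratio when Y is univariate.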
NASA Technical Reports Server (NTRS)
Wong, Kin C.
2003-01-01
This paper documents the derivation of the data reduction equations for the calibration of the six-component thrust stand located in the CE-22 Advanced Nozzle Test Facility. The purpose of the calibration is to determine the first-order interactions between the axial, lateral, and vertical load cells (second-order interactions are assumed to be negligible). In an ideal system, the measurements made by the thrust stand along the three coordinate axes should be independent. For example, when a test article applies an axial force on the thrust stand, the axial load cells should measure the full magnitude of the force, while the off-axis load cells (lateral and vertical) should read zero. Likewise, if a lateral force is applied, the lateral load cells should measure the entire force, while the axial and vertical load cells should read zero. However, in real-world systems, there may be interactions between the load cells. Through proper design of the thrust stand, these interactions can be minimized, but are hard to eliminate entirely. Therefore, the purpose of the thrust stand calibration is to account for these interactions, so that necessary corrections can be made during testing. These corrections can be expressed in the form of an interaction matrix, and this paper shows the derivation of the equations used to obtain the coefficients in this matrix.
Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula
2015-09-01
The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary.
Yan, Peng; Guo, Jin-Song; Wang, Jing; Chen, You-Peng; Ji, Fang-Ying; Dong, Yang; Zhang, Hong; Ouyang, Wen-juan
2015-05-01
An advanced wastewater treatment process (SIPER) was developed to simultaneously decrease sludge production, prevent the accumulation of inorganic solids, recover phosphorus, and enhance nutrient removal. The feasibility of simultaneous enhanced nutrient removal along with sludge reduction as well as the potential for enhanced nutrient removal via this process were further evaluated. The results showed that the denitrification potential of the supernatant of alkaline-treated sludge was higher than that of the influent. The system COD and VFA were increased by 23.0% and 68.2%, respectively, after the return of alkaline-treated sludge as an internal C-source, and the internal C-source contributed 24.1% of the total C-source. A total of 74.5% of phosphorus from wastewater was recovered as a usable chemical crystalline product. The nitrogen and phosphorus removal were improved by 19.6% and 23.6%, respectively, after incorporation of the side-stream system. Sludge minimization and excellent nutrient removal were successfully coupled in the SIPER process.
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
Applications of Variance Fractal Dimension: a Survey
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Phukpattaranont, Pornchai; Limsakul, Chusak
2012-04-01
Chaotic dynamical systems are pervasive in nature and can be shown to be deterministic through fractal analysis. There are numerous methods that can be used to estimate the fractal dimension. Among the usual fractal estimation methods, variance fractal dimension (VFD) is one of the most significant fractal analysis methods that can be implemented for real-time systems. The basic concept and theory of VFD are presented. Recent research and the development of several applications based on VFD are reviewed and explained in detail, such as biomedical signal processing and pattern recognition, speech communication, geophysical signal analysis, power systems and communication systems. The important parameters that need to be considered in computing the VFD are discussed, including the window size and the window increment of the feature, and the step size of the VFD. Directions for future research of VFD are also briefly outlined.
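The VFD computation itself is compact; a minimal sketch under the usual power-law assumption (function name ours):

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16, 32)):
    """VFD of a 1-D signal: Var[x(t+dt) - x(t)] ~ dt^(2H); H is half the
    log-log slope, and D = 2 - H for a one-dimensional signal."""
    x = np.asarray(x, dtype=float)
    v = [np.var(x[k:] - x[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return 2.0 - 0.5 * slope

# Brownian motion (H ~ 0.5) should give D close to 1.5
rng = np.random.default_rng(1)
print(variance_fractal_dimension(np.cumsum(rng.normal(size=20000))))
```

The window size and increment discussed above correspond to computing this estimate over short sliding segments rather than the full record.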
Considering Oil Production Variance as an Indicator of Peak Production
2010-06-07
[Figure residue: Imported Refiner Acquisition Cost (IRAC) oil prices, constructed from EIA data (http://tonto.eia.doe.gov/country/timeline/oil_chronology.cfm); panel title: Production vs. Price – Variance Comparison; caption fragment: oil production variance and oil price variance have never been so far]
A New Nonparametric Levene Test for Equal Variances
ERIC Educational Resources Information Center
Nordstokke, David W.; Zumbo, Bruno D.
2010-01-01
Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
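One reading of the rank-based procedure (pool all observations, rank them, then run the mean-centered Levene test on the ranks), sketched with scipy; treat the exact form as our assumption rather than the paper's definition:

```python
import numpy as np
from scipy import stats

def nonparametric_levene(*groups):
    """Levene-type test of equal variances computed on the ranks of the
    pooled data, split back into the original groups."""
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)
    cuts = np.cumsum([len(g) for g in groups])[:-1]
    return stats.levene(*np.split(ranks, cuts), center="mean")
```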
Zhang, Kai; Dong, Haifeng; Dai, Wenhao; Meng, Xiangdan; Lu, Huiting; Wu, Tingting; Zhang, Xueji
2017-01-03
Herein, an efficient electrochemical tracer with advanced oxygen reduction reaction (ORR) performance was designed by controllably decorating platinum (Pt; diameter, 1 nm) on the surface of compositionally tunable tin-doped indium oxide nanoparticles (Sn-In2O3; diameter, 25 nm). Using Pt/Sn-In2O3 as the electrochemical tracer and an interfacial term hairpin capture probe, a facile and ultrasensitive microRNA (miRNA) detection strategy was developed. The morphology and composition of the resulting Pt/Sn-In2O3 NPs were comprehensively characterized by spectroscopic and microscopic measurements, indicating that numerous Pt particles were uniformly anchored on the surface of the Sn-In2O3. The interaction between Pt and surface Sn, as well as the high Pt(111) exposure, resulted in the excellent electrocatalytic activity and stability of Pt/Sn-In2O3 toward the ORR. As proof of principle, using streptavidin (SA)-functionalized Pt/Sn-In2O3 (SA/Pt/Sn-In2O3) as the electrochemical tracer to amplify the detectable signal and an interfacial term hairpin probe as the target capture probe, a miRNA biosensor with a linear range from 5 pM to 0.5 fM and a limit of detection (LOD) of 1.92 fM was developed. Meanwhile, the inherent selectivity of the term hairpin capture probe endowed the biosensor with good base-discrimination ability. Good feasibility for real-sample detection was also demonstrated. The work paves a new avenue for designing and fabricating highly effective electrocatalytic tracers, which hold great promise in new bioanalytical applications.
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
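The SES itself is compact; a generic sketch (not the variance-stabilized indicator proposed in the paper):

```python
import numpy as np
from scipy.signal import hilbert

def square_envelope_spectrum(x, fs):
    """Square envelope spectrum: squared magnitude of the analytic signal,
    mean removed, then the magnitude of its one-sided Fourier transform."""
    env2 = np.abs(hilbert(x))**2
    env2 -= env2.mean()                       # drop the DC component
    ses = np.abs(np.fft.rfft(env2)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, ses
```

Peaks of the SES at a fault's cyclic frequency are then what the statistical tests discussed above try to distinguish from normal variability.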
Correcting an analysis of variance for clustering.
Hedges, Larry V; Rhoads, Christopher H
2011-02-01
A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
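A deliberately simplified illustration of a multiplicative correction of this kind, using the design effect for a balanced design with a single intraclass correlation (the paper's factors also reduce the degrees of freedom, which we omit here):

```python
def f_corrected_for_clustering(F, n_per_cluster, icc):
    """Deflate an ANOVA F computed while ignoring clustering by the
    design effect 1 + (n - 1) * icc (simplified; df adjustment omitted)."""
    return F / (1.0 + (n_per_cluster - 1.0) * icc)

# icc = 0 leaves F unchanged; icc = 1 behaves like n-fold fewer units
print(f_corrected_for_clustering(6.0, n_per_cluster=25, icc=0.1))
```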
Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera
NASA Astrophysics Data System (ADS)
Marchitto, T. M.; Grist, H. R.; van Geen, A.
2013-12-01
Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians, while a reduction in seasonality would cause the two species' distributions to move closer together.
2001-10-25
ADVANCED BREAST CANCER [title fragment]
Keyserlingk, John R.; Yassa, Mariam; Ahlgren, Paul; Belliveau, Normand
Ville Marie Oncology Center; St. Mary's Hospital, Montreal, Canada
Abstract: 20 successive patients who received preoperative chemohormonotherapy (PCT) for locally advanced breast cancer underwent high... Approximately 10% of our current breast cancer patients present with sufficient tumor load to be classified as having locally advanced breast cancer.
Smith, L.L.; Hooper, M.P.
1992-07-13
This Phase 2 Test Report summarizes the testing activities and results for the second testing phase of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The second phase demonstrates the Advanced Overfire Air (AOFA) retrofit with existing Foster Wheeler (FWEC) burners. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data supported by short-term characterization data. Ultimately a fifty percent NOx reduction target using combinations of combustion modifications has been established for this project.
Smith, L.L.; Hooper, M.P.
1992-07-13
This Phase 2 Test Report summarizes the testing activities and results for the second testing phase of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The second phase demonstrates the Advanced Overfire Air (AOFA) retrofit with existing Foster Wheeler (FWEC) burners. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data supported by short-term characterization data. Ultimately a fifty percent NOx reduction target using combinations of combustion modifications has been established for this project.
Jeong, Joonseon; Jung, Jinyoung; Cooper, William J; Song, Weihua
2010-08-01
The presence of iodinated X-ray contrast media (ICM) compounds in surface and ground waters has been reported, likely due to their biological inertness and incomplete removal in wastewater treatment processes. The present study reports partial degradation mechanisms based on elucidating the structures of major reaction by-products using gamma irradiation and LC-MS. Studies conducted at concentrations higher than those observed in natural waters are necessary to elucidate the by-product structures and to develop destruction mechanisms. To support these mechanistic studies, the bimolecular rate constants for the reaction of •OH and e⁻(aq) with one ionic ICM (diatrizoate), four non-ionic ICM (iohexol, iopromide, iopamidol, and iomeprol), and several analogues of diatrizoate were determined. The absolute bimolecular reaction rate constants for diatrizoate, iohexol, iopromide, iopamidol, and iomeprol with •OH were (9.58 ± 0.23) × 10⁸, (3.20 ± 0.13) × 10⁹, (3.34 ± 0.14) × 10⁹, (3.42 ± 0.28) × 10⁹, and (2.03 ± 0.13) × 10⁹ M⁻¹ s⁻¹, and with e⁻(aq) were (2.13 ± 0.03) × 10¹⁰, (3.35 ± 0.03) × 10¹⁰, (3.25 ± 0.05) × 10¹⁰, (3.37 ± 0.05) × 10¹⁰, and (3.47 ± 0.02) × 10¹⁰ M⁻¹ s⁻¹, respectively. Transient spectra for the intermediates formed by the reaction with •OH were also measured over 1-100 μs to better understand the stability of the radicals and to evaluate reaction rate constants. Degradation efficiencies for the •OH and e⁻(aq) reactions with the five ICM were determined using steady-state gamma radiolysis. Collectively, these data will form the basis of kinetic models for the application of advanced oxidation/reduction processes to treating water containing these compounds.
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
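The line-to-line estimator studied here (our reading of the design-based 'R2' form, with lines as sampling units weighted by length) sketches as:

```python
import numpy as np

def encounter_rate_variance(n_k, l_k):
    """Design-based variance of the encounter rate n/L from K lines:
    n_k detections on lines of length l_k."""
    n_k = np.asarray(n_k, dtype=float)
    l_k = np.asarray(l_k, dtype=float)
    K, L, n = l_k.size, l_k.sum(), n_k.sum()
    er = n / L
    return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - er)**2)
```

Under a systematic design this treats the lines as if drawn at random, which is exactly the approximation whose bias, and its poststratified repair, the paper examines.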
Prediction of membrane protein types using maximum variance projection
NASA Astrophysics Data System (ADS)
Wang, Tong; Yang, Jie
2011-05-01
Predicting membrane protein types has a positive influence on further biological function analysis. Quickly and efficiently annotating the type of an uncharacterized membrane protein is a challenge. In this work, a system based on maximum variance projection (MVP) is proposed to improve the prediction performance of membrane protein types. The feature extraction step is based on a hybridization representation approach that fuses Position-Specific Score Matrix composition. The protein sequences are quantized in a high-dimensional space using this representation strategy. Analysing these high-dimensional feature vectors raises problems such as high computing time and high classifier complexity. To solve this issue, MVP, a novel dimensionality reduction algorithm, is introduced to extract the essential features from the high-dimensional feature space. Then, a K-nearest neighbour classifier is employed to identify the types of membrane proteins based on their reduced low-dimensional features. As a result, the jackknife and independent dataset test success rates of this model reach 86.1 and 88.4%, respectively, which suggests that the proposed approach is very promising for predicting membrane protein types.
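Maximum variance projection itself is not available in common libraries, so the sketch below substitutes PCA for the dimensionality-reduction step to illustrate the overall pipeline (feature reduction followed by a nearest-neighbour classifier, with the jackknife test realised as leave-one-out cross-validation); the data arrays are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: 200 proteins, 400 PSSM-composition features, 5 types.
rng = np.random.default_rng(0)
X, y = rng.random((200, 400)), rng.integers(0, 5, 200)

# PCA stands in for MVP; a 1-nearest-neighbour classifier follows.
model = make_pipeline(PCA(n_components=30), KNeighborsClassifier(n_neighbors=1))
# The jackknife test corresponds to leave-one-out cross-validation.
print(cross_val_score(model, X, y, cv=LeaveOneOut()).mean())
```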
A fast minimum variance beamforming method using principal component analysis.
Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo
2014-06-01
Minimum variance (MV) beamforming has been studied for improving the performance of a diagnostic ultrasound imaging system. However, it is not easy for the MV beamforming to be implemented in a real-time ultrasound imaging system because of the enormous amount of computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates the MV beamforming while reducing the computational complexity greatly through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the transformed domain of beamformer input signal by the PCA, where the dimension of the transformed covariance matrix is identical to the number of some selected principal component vectors. Both computer simulation and experiment were carried out to verify the effectiveness of the proposed method with echo signals from simulation as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
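The core idea (an offline PCA of conventional MV weights, then online beamforming in the low-dimensional transformed domain) can be sketched in a few lines of numpy; the steering vector, training procedure, and diagonal loading below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64                 # array channels
a = np.ones(M)         # assumed broadside steering vector

def mv_weights(R, a, eps=1e-2):
    """Minimum-variance weights w = R^-1 a / (a^H R^-1 a), with
    diagonal loading for a stable inverse."""
    Rl = R + eps * np.trace(R) / len(a) * np.eye(len(a))
    x = np.linalg.solve(Rl, a)
    return x / (a @ x)

# Offline: PCA of a set of pre-computed conventional MV weights.
W = np.array([mv_weights(np.cov(rng.standard_normal((M, 200))), a)
              for _ in range(100)])
_, _, Vt = np.linalg.svd(W - W.mean(0), full_matrices=False)
B = Vt[:2]             # two dominant principal components

# Online: beamform in the 2-D transformed domain (covariance is only 2 x 2).
X = rng.standard_normal((M, 200))       # one frame of channel data
w_r = mv_weights(np.cov(B @ X), B @ a)  # reduced-domain MV weights
w_approx = B.T @ w_r                    # approximate element-domain weights
print(w_approx.shape)
```

The design point is that the expensive M × M covariance inversion is replaced by a 2 × 2 inversion in the transformed domain, which is what makes real-time operation plausible.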
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
NASA Astrophysics Data System (ADS)
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using such variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique based on RR-interval variances, we find good atrial fibrillation detection performance.
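A toy version of the idea (windowed RR-interval variance compared against a threshold) is sketched below; the window length, threshold, and simulated rhythms are illustrative assumptions, not the clinical settings of the paper.

```python
import numpy as np

def af_flags(rr, win=30, threshold=0.02):
    """Flag windows whose RR-interval variance exceeds a threshold."""
    rr = np.asarray(rr, float)
    return np.array([rr[i:i + win].var() > threshold
                     for i in range(len(rr) - win + 1)])

rng = np.random.default_rng(1)
sinus = 0.8 + 0.02 * rng.standard_normal(300)  # regular rhythm (s)
af = 0.8 + 0.25 * rng.standard_normal(300)     # irregular, AF-like rhythm
# Fraction of flagged windows: near 0 for sinus rhythm, near 1 for AF.
print(af_flags(sinus).mean(), af_flags(af).mean())
```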
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
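For reference, the conventional time-domain estimator that the matrix-trace formulation reproduces is easy to state in code: the Hadamard variance is one sixth of the mean squared second difference of the block-averaged fractional-frequency data. The sketch below uses that textbook definition, not the paper's spectral simulation machinery.

```python
import numpy as np

def hadamard_variance(y, m=1):
    """Overlapping Hadamard variance of fractional-frequency data y at
    averaging factor m (tau = m * tau0)."""
    ym = np.convolve(np.asarray(y, float), np.ones(m) / m, mode="valid")
    d = ym[2 * m:] - 2 * ym[m:-m] + ym[:-2 * m]  # second differences
    return (d ** 2).mean() / 6.0

# White frequency noise plus linear frequency drift: the drift term
# cancels in the second difference, illustrating drift insensitivity.
rng = np.random.default_rng(0)
y = rng.standard_normal(10_000) + 1e-3 * np.arange(10_000)
print([hadamard_variance(y, m) for m in (1, 10, 100)])
```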
1996-07-01
This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NO{sub x} emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NO{sub x} burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NO{sub x} burners, advanced overfire systems, and digital control system.
Not Available
1992-02-03
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an Advanced Overfire Air (AOFA) system followed by Low NO{sub x} Burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.
Not Available
1992-08-24
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO[sub x]) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO[sub x] combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO[sub x] reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO[sub x] burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO[sub x] reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.
Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System
NASA Astrophysics Data System (ADS)
Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.
2016-06-01
Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in data caused by the imaging system is to be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
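As a rough illustration of the deflate-then-project idea (with a single two-level factor, deflation of the data matrix reduces to centering), the sketch below fits a PLS model to the factor's dummy variable and uses the PLS prediction as a one-factor score; the data are random placeholders and the full target-projection algebra of the paper is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 100))      # placeholder chromatographic profiles
factor = np.repeat([0.0, 1.0], 20)      # design dummy (e.g. pasteurization)

Xd = X - X.mean(axis=0)                 # deflation under the reduced model
pls = PLSRegression(n_components=2).fit(Xd, factor)
t = pls.predict(Xd).ravel()             # one-factor score per sample

# Group means and overall spread of the one-factor scores:
print(t[factor == 0].mean(), t[factor == 1].mean(), t.var())
```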
EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation
Fan, Xuetong
2000-08-20
Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependence on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. It is therefore necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
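The variance-based ranking described here can be illustrated with a generic first-order index, S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning the sampled inputs; the toy response below merely stands in for the RBE/EQD2 model and all distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
alpha = rng.normal(0.10, 0.04, n)      # assumed biological parameters
beta = rng.normal(0.05, 0.01, n)
dose = rng.uniform(1.0, 3.0, n)
Y = alpha * dose + beta * dose**2      # stand-in for the evaluated model

def first_order_S(x, y, bins=50):
    """S_i = Var(E[Y | X_i]) / Var(Y), estimated with quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((means - y.mean())**2, weights=counts) / y.var()

for name, x in (("alpha", alpha), ("beta", beta), ("dose", dose)):
    print(name, round(first_order_S(x, Y), 3))
```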
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
76 FR 78698 - Proposed Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Gender Variance and Educational Psychology: Implications for Practice
ERIC Educational Resources Information Center
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
42 CFR 456.522 - Content of request for variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and...
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity
Diaz, S Anaid; Viney, Mark
2014-01-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and the variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species. PMID:25360248
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
Variances and Covariances of Kendall's Tau and Their Estimation.
ERIC Educational Resources Information Center
Cliff, Norman; Charlin, Ventura
1991-01-01
Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
2008-09-01
High Sulphur Fuel (HSF) is a potential problem to NATO forces when vehicles and equipment are fitted with advanced emission reduction devices that require Low...worldwide available, standard fuel (F-34) and equipment capable of using such high sulphur fuels (HSF). Recommendations • Future equipment fitted with...will all be affected when using HSF. However, actions can be taken to overcome problems by by-passing these systems, modifying the ECU or a
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Greenfield, Victoria A; Paoli, Letizia
2012-01-01
Critics of the international drug-control regime contend that supply-oriented policy interventions are not just ineffective, but, in focusing almost exclusively on supply reduction, they also produce unintended adverse consequences. Evidence from the world heroin market supports their claims. The balance of the effects of policy is yet unknown, but the prospect of adverse consequences underlies a central paradox of contemporary supply-oriented policy. In this paper, we evaluate whether harm reduction, a subject of intense debate in the demand-oriented drug-policy community, can provide a unifying foundation for supply-oriented drug policy and speak more directly to policy goals. Our analysis rests on an extensive review of the literature on harm reduction and draws insight from other policy communities' disciplines and methods. First, we explore the paradoxes of supply-oriented policy that initially motivated our interest in harm reduction; second, we consider the conceptual and technical challenges that have contributed to the debate on harm reduction and assess their relevance to a supply-oriented application; third, we examine responses to those challenges, i.e., various tools (taxonomies, models, and measurement strategies), that can be used to identify, categorize, and assess harms. Despite substantial conceptual and technical challenges, we find that harm reduction can provide a basis for assessing the net consequences of supply-oriented drug policy, choosing more rigorously amongst policy options, and identifying new options. In addition, we outline a practical path forward for assessing harms and policy options. On the basis of our analysis, we suggest pursuing a harm-based approach and making a clearer distinction between supply-oriented and supply-reduction policy.
Estimation of Model Error Variances During Data Assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick
2003-01-01
Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data
NASA Technical Reports Server (NTRS)
May, Todd A.
2011-01-01
SLS is a national capability that empowers entirely new exploration for missions of national importance. Program key tenets are safety, affordability, and sustainability. SLS builds on a solid foundation of experience and current capacities to enable a timely initial capability and evolve to a flexible heavy-lift capability through competitive opportunities: (1) Reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; (2) Enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability and performance. The road ahead promises to be an exciting journey for present and future generations, and we look forward to working with you to continue America's space exploration.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj
2014-01-01
This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
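The Dk statistic is simple to compute from any relationship matrix, as in the sketch below (toy genotypes and a VanRaden-style genomic matrix; the REML variance component is a made-up number used only to show the rescaling).

```python
import numpy as np

def Dk(K):
    """Average self-relationship minus the average relationship."""
    K = np.asarray(K, float)
    return np.mean(np.diag(K)) - np.mean(K)

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(100, 5000)).astype(float)  # toy 0/1/2 genotypes
p = M.mean(axis=0) / 2
Z = M - 2 * p
G = Z @ Z.T / (2 * (p * (1 - p)).sum())   # VanRaden-type genomic matrix

sigma2_hat = 0.4   # hypothetical REML variance component
# Genetic variance referred to the reference population of the 100 animals:
print(Dk(G), sigma2_hat * Dk(G))
```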
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
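A bare-bones version of the filtering idea (not the authors' code): with known site-specific error variances, the errors enter only the diagonal of the data covariance, so the kriging system automatically down-weights noisy sites. The covariance model, locations, and variances below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0, 10, (30, 2))          # observation locations
tau2 = rng.uniform(0.01, 1.0, 30)        # known, heterogeneous error variances

def cov(a, b, sill=1.0, r=3.0):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-d / r)         # exponential covariance model

C = cov(s, s) + np.diag(tau2)            # signal + measurement error
z = rng.multivariate_normal(np.zeros(30), C)   # synthetic observations

s0 = np.array([[5.0, 5.0]])              # prediction site
lam = np.linalg.solve(C, cov(s, s0)).ravel()   # error-filtered weights
print(lam @ z)                           # prediction of the error-free process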
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths z 5 km and horizontal along-track wavelengths of y 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5?8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive, and these made use of information either on the variance of the estimated breeding value and the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
Friede, Tim; Kieser, Meinhard
2013-01-01
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one-sample estimator and in turn the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
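For orientation, a textbook blinded re-estimation step with the one-sample (pooled) variance looks like the sketch below; note the comment on the built-in inflation of the pooled variance under the alternative, which is part of why the desired power is not guaranteed. All values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def reestimated_n(pooled, delta, alpha=0.05, power=0.9):
    """Per-group n for a two-sample z-test, using the blinded one-sample
    variance of the pooled interim data. Under the alternative this
    variance overestimates sigma^2 by about delta^2 / 4 (1:1 allocation)."""
    s2 = np.var(pooled, ddof=1)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * s2 * (z / delta) ** 2))

rng = np.random.default_rng(0)
interim = np.concatenate([rng.normal(0.0, 1.0, 40),   # masked arm A
                          rng.normal(0.5, 1.0, 40)])  # masked arm B
print(reestimated_n(interim, delta=0.5))
```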
Not Available
1992-12-31
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full load NO{sub x} emission levels to be approximately 0.65 lb/MBtu. Flyash LOI values for the LNB configuration are approximately 8 percent at full load. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. Abbreviated diagnostic tests for the LNB+AOFA configuration indicate that at 500 MWe, NO{sub x} emissions are approximately 0.55 lb/MBtu with corresponding flyash LOI values of approximately 11 percent. For comparison, the long-term, full load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB+AOFA configuration will be performed when the stack particulate emissions issue is resolved.
Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach
NASA Astrophysics Data System (ADS)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex
2016-06-01
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected model parameters using a modified volatility basis-set (VBS) approach: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semivolatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the model parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether or not SOA that starts as semivolatile is rapidly transformed to nonvolatile SOA by particle-phase processes such as oligomerization and/or accretion, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to nonvolatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. However
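A quasi-Monte Carlo sampling step of the kind described (seven parameters, a low-discrepancy design scaled to assumed parameter ranges) can be sketched with scipy's qmc module; the bounds below are placeholders, not the study's actual ranges.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=7, seed=0)        # 7 uncertain model parameters
u = sampler.random(256)                 # low-discrepancy points in [0, 1]^7

lo = np.array([0.5, 0.5, 0.5, 0.5, 0.1, 0.1, 0.0])  # assumed lower bounds
hi = np.array([2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0])  # assumed upper bounds
params = qmc.scale(u, lo, hi)           # one row per ensemble member
print(params.shape)                     # (256, 7)
```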
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
Hidden item variance in multiple mini-interview scores.
Zaidi, Nikki L Bibler; Swoboda, Christopher M; Kelcey, Benjamin M; Manuel, R Stephen
2017-05-01
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden item, person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden item model (p × s|i) will result in biased variance components which can bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item model (p × s × i) when evaluating generalizability of MMI scores.
Allan variance of time series models for measurement data
NASA Astrophysics Data System (ADS)
Zhang, Nien Fan
2008-10-01
The uncertainty of the mean of autocorrelated measurements from a stationary process has been discussed in the literature. However, when the measurements are from a non-stationary process, how to assess their uncertainty remains unresolved. Allan variance or two-sample variance has been used in time and frequency metrology for more than three decades as a substitute for the classical variance to characterize the stability of clocks or frequency standards when the underlying process is a 1/f noise process. However, its applications are related only to the noise models characterized by the power law of the spectral density. In this paper, from the viewpoint of the time domain, we provide a statistical underpinning of the Allan variance for discrete stationary processes, random walk and long-memory processes such as the fractional difference processes including the noise models usually considered in time and frequency metrology. Results show that the Allan variance is a better measure of the process variation than the classical variance of the random walk and the non-stationary fractional difference processes including the 1/f noise.
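The time-domain definition underlying the discussion is compact enough to state directly: the Allan (two-sample) variance is half the mean squared difference of successive block averages. The sketch below contrasts it with the classical variance on white noise and on a random walk; the noise models are illustrative.

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping two-sample (Allan) variance at averaging factor m."""
    y = np.asarray(y, float)
    M = len(y) // m
    ybar = y[:M * m].reshape(M, m).mean(axis=1)   # block averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)
walk = np.cumsum(white)                 # non-stationary random walk
# For white noise both measures agree; for the random walk the classical
# variance grows with the sample, while the Allan variance stays stable.
print(allan_variance(white), white.var())
print(allan_variance(walk), walk.var())
```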
Variance estimation in the analysis of microarray data.
Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
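The errors-in-variables effect described here is easy to reproduce: fitting the quadratic variance-mean model by naively regressing few-replicate sample variances on sample means gives biased coefficients. The simulation below is a minimal illustration with assumed true coefficients, not the authors' estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(1, 10, 2000)                       # true gene means
true_var = 0.1 + 0.0 * mu + 0.04 * mu**2            # Var = a + b*mu + c*mu^2
X = mu[:, None] + np.sqrt(true_var)[:, None] * rng.standard_normal((2000, 3))

m, v = X.mean(axis=1), X.var(axis=1, ddof=1)        # 3-replicate estimates
A = np.c_[np.ones_like(m), m, m**2]
coef, *_ = np.linalg.lstsq(A, v, rcond=None)        # naive fit
print(coef)   # compare with (0.10, 0.00, 0.04); the naive fit is biased
```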
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
NASA Astrophysics Data System (ADS)
Pesenson, Meyer; Pesenson, I.; Carey, S.; Roby, W.; McCollum, B.; Ingalls, J.; Ardila, D.; Teplitz, H.
2009-01-01
Effective analysis of large multispectral and multitemporal data sets demands new ways of data representation. We present applications of standard and original methods of data dimension reduction to astrophysical images (finding significant low-dimensional structures concealed in high-dimensional data). Such methods are already widely used outside of astronomy to analyze large data sets effectively. Data dimension reduction facilitates data organization, retrieval, and analysis (by improving statistical inference), which are crucial to multiwavelength astronomy, archival research, large-scale digital sky surveys and temporal astronomy. These methods allow a user to reduce a large number of FITS images, e.g. each representing a different wavelength, into a few images retaining more than 95% of the original visual information. An immediate simple application of this would be creating a multiwavelength "quick-look" image that includes all essential information in a statistically justified way, and thus is much more accurate than a "quick-look" made by simple coadding with an ad hoc, heuristic weighting. The dimensionally reduced image is also naturally much smaller in file size than the combined size of the original images; dimensionally reduced images therefore offer enormous savings in storage space and database-transmission bandwidth. An analogous process of dimension reduction is possible for a large set of images obtained at the same wavelength but at different times (e.g. LSST images). Other applications of data dimension reduction include, but are not limited to, decorrelating data elements, removing noise, artifact separation, feature extraction, clustering and pattern classification in astronomical images. We demonstrate applications of the algorithms to test cases of current space-based IR data from the Spitzer Space Telescope.
Yan, Peng; Ji, Fangying; Wang, Jing; Fan, Jianping; Guan, Wei; Chen, Qingkong
2013-08-01
Sludge reduction technologies are increasingly important in wastewater treatment, but have some defects. To remedy them, a novel, integrated process including sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal was developed. The pilot-scale system was operated steadily at a treatment scale of 10 m³/d for 90 days. The results showed excellent nutrient removal, with average removal efficiencies for NH₄⁺-N, TN, TP, and COD reaching 98.2 ± 1.34%, 75.5 ± 3.46%, 95.3 ± 1.65%, and 92.7 ± 2.49%, respectively. The ratio of mixed liquor volatile suspended solids (MLVSS) to mixed liquor suspended solids (MLSS) in the system gradually increased from 0.33 to 0.52. The process effectively prevented the accumulation of inert or inorganic solids in the activated sludge. Phosphorus was recovered from the wastewater as a crystalline product with aluminum ions. The observed sludge yield Y_obs of the system was 0.103 g VSS/g COD, demonstrating that the system's sludge reduction potential is excellent.
NASA Technical Reports Server (NTRS)
Cawthorn, J. M.; Brown, C. G.
1974-01-01
A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies which are currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits which would add sound absorbent material in the nacelles of the engines or which would replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented of the effects on noise exposure area of the total number of daily operations.
Minimum variance system identification with application to digital adaptive flight control
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1975-01-01
A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.
1998-01-01
This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's (FWEC) advanced overfire air (AOFA) system and Controlled Flow/Split Flame (CF/SF) low NOx burner (LNB). The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire systems.
Not Available
1993-12-31
This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, LNB, and LNB plus AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NOx emission levels to be approximately 0.65 lb/MBtu with fly ash LOI values of approximately 8 percent. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. For comparison, the long-term full-load, baseline NOx emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing in the LNB+AOFA configuration indicates that at full load, NOx emissions and fly ash LOI are near 0.40 lb/MBtu and 8 percent, respectively. However, it is believed that a substantial portion of the incremental change in NOx emissions between the LNB and LNB+AOFA configurations is the result of additional burner tuning and other operational adjustments and is not the result of the AOFA system. During this quarter, LNB+AOFA testing was concluded. Testing performed during this quarter included long-term and verification testing in the LNB+AOFA configuration.
Tavoulareas, E.S.; Hardman, R.; Eskinazi, D.; Smith, L.
1994-02-01
This report provides the key findings of the Innovative Clean Coal Technology (ICCT) demonstration project at Gulf Power's Lansing Smith Unit No. 2 and the implications for other tangentially-fired boilers. L. Smith Unit No. 2 is a 180 MW tangentially-fired boiler burning Eastern bituminous coal, which was retrofitted with Asea Brown Boveri/Combustion Engineering Services' (ABB/CE) LNCFS I, II, and III technologies. An extensive test program was carried out with US Department of Energy, Southern Company, and Electric Power Research Institute (EPRI) funding. The LNCFS I, II, and III achieved 37 percent, 37 percent, and 45 percent average long-term NOx emission reduction at full load, respectively. Similar NOx reduction was achieved within the control range (100-200 MW). However, below the control point (100 MW), NOx emissions with the LNCFS technologies increased significantly, reaching pre-retrofit levels at 70 MW. Short-term testing proved that low-load NOx emissions could be reduced further by using lower excess O2 and burner tilt, but with adverse impacts on unit performance, such as lower steam outlet temperatures and, potentially, higher CO emissions and LOI.
Terpos, E; Migkou, M; Christoulas, D; Gavriatopoulou, M; Eleutherakis-Papaiakovou, E; Kanellias, N; Iakovaki, M; Panagiotidis, I; Ziogas, D C; Fotiou, D; Kastritis, E; Dimopoulos, M A
2016-01-01
Circulating vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and selectins were prospectively measured in 145 newly-diagnosed patients with symptomatic myeloma (NDMM), 61 patients with asymptomatic/smoldering myeloma (SMM), 47 with monoclonal gammopathy of undetermined significance (MGUS) and 87 multiple myeloma (MM) patients at first relapse who received lenalidomide- or bortezomib-based treatment (RD, n=47; or VD, n=40). Patients with NDMM had increased VCAM-1 and ICAM-1 compared with MGUS and SMM patients. Elevated VCAM-1 correlated with ISS-3 and was independently associated with inferior overall survival (OS) (45 months for patients with VCAM-1 >median vs 75 months, P=0.001). MM patients at first relapse had increased levels of ICAM-1 and L-selectin, even compared with NDMM patients, and had increased levels of VCAM-1 compared with MGUS and SMM. Both VD and RD dramatically reduced serum VCAM-1 after four cycles of therapy, but only VD reduced serum ICAM-1, irrespective of response to therapy. The reduction of VCAM-1 was more pronounced after RD than after VD. Our study provides evidence for the prognostic value of VCAM-1 in myeloma patients, suggesting that VCAM-1 could be a suitable target for the development of anti-myeloma therapies. Furthermore, the reduction of VCAM-1 and ICAM-1 by RD and VD supports the inhibitory effect of these drugs on the adhesion of MM cells to stromal cells.
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
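A minimal sketch of the kind of monitoring the last sentence suggests, with entirely synthetic survey data and an invented two-sigma baseline rule: compute the spatial variance of the log cod-to-prey ratio across stations each year and flag years where it exceeds the early-period baseline.

```python
import numpy as np

rng = np.random.default_rng(3)
years, stations, shift_year = 30, 60, 20      # hypothetical survey design

# Synthetic log cod-to-prey ratios: spatial spread grows as the system
# approaches a built-in regime shift at shift_year.
t = np.arange(years)
spread = 0.3 + 0.5 * np.clip((t - 10) / (shift_year - 10), 0, 1)
mean = np.where(t < shift_year, 0.0, -1.5)    # transition in mean state
ratio = rng.normal(mean[:, None], spread[:, None], (years, stations))

spatial_var = ratio.var(axis=1, ddof=1)       # one spatial variance per year
base_mean, base_std = spatial_var[:12].mean(), spatial_var[:12].std(ddof=1)
for yr in range(12, years):
    if spatial_var[yr] > base_mean + 2 * base_std:   # invented 2-sigma rule
        print(f"year {yr}: elevated spatial variance (early warning)")
        break
```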
Guigues, Nathalie; Desenfant, Michèle; Hance, Emmanuel
2013-09-01
The objective of this paper was to demonstrate how multivariate statistics combined with the analysis of variance could support decision-making during the process of redesigning a water quality monitoring network with highly heterogeneous datasets in terms of time and space. Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were selected to optimise the selection of water quality parameters to be monitored as well as the number and location of monitoring stations. Sampling frequency was specifically investigated through the analysis of variance. The data used were obtained between 2007 and 2010 at the Long-term Environmental Research Monitoring and Testing System (OPE) located in the north-eastern part of France, in connection with a geological radioactive-waste disposal project. PCA results showed that no substantial reduction among the parameters was possible, as strong correlations exist only among electrical conductivity, calcium and bicarbonates. HCA results were geospatially represented for each field campaign and compared to one another in terms of similarities and differences, allowing us to group the monitoring stations into 12 categories. This approach enabled us to take into account not only the spatial variability of water quality but also its temporal variability. Finally, the analysis of variances showed that three very different behaviours occurred: parameters with high temporal variability and low spatial variability (e.g. suspended matter), parameters with high spatial variability and average temporal variability (e.g. calcium), and parameters with both high temporal and spatial variability (e.g. nitrate).
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of systems of two noninteracting particles, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, a key signature of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting-particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
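For the uncorrelated (Poisson) baseline the abstract compares against, the number variance Σ²(L) ≈ L can be checked directly by Monte Carlo. The sketch below counts levels in random windows of an uncorrelated spectrum; the embedded-ensemble spectra themselves would require diagonalizing two-body random-matrix Hamiltonians and are not attempted here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_windows, realizations = 10_000, 200, 100

def number_variance(L):
    """Monte Carlo number variance for an uncorrelated (Poisson) spectrum
    with unit mean level density on [0, N]."""
    counts = []
    for _ in range(realizations):
        levels = np.sort(rng.uniform(0, N, N))       # uncorrelated levels
        starts = rng.uniform(0, N - L, n_windows)    # random window positions
        counts.append(np.searchsorted(levels, starts + L)
                      - np.searchsorted(levels, starts))
    return np.concatenate(counts).var(ddof=1)

for L in (1, 5, 20):
    print(L, number_variance(L))   # approximately L for Poisson statistics
```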
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
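The species-richness analogue of this comparison fits in a few lines (the paper's PD formulae additionally weight each branch of the tree, which this sketch omits). Hurlbert's exact rarefaction mean, E[S_n] = Σ_i [1 − C(N−N_i, n)/C(N, n)], is checked against repeated random subsampling; the counts are hypothetical.

```python
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(5)
counts = np.array([50, 20, 10, 5, 3, 1, 1])   # hypothetical per-species counts
N, n = counts.sum(), 25                       # total individuals, rarefied depth

# Exact mean richness under rarefaction: E[S_n] = sum_i 1 - C(N-N_i, n)/C(N, n)
exact_mean = np.sum(1.0 - comb(N - counts, n) / comb(N, n))

# Monte Carlo subsampling check (also yields an empirical variance).
pool = np.repeat(np.arange(len(counts)), counts)
draws = [np.unique(rng.choice(pool, n, replace=False)).size
         for _ in range(20_000)]
print("exact mean:", exact_mean)
print("MC mean   :", np.mean(draws), " MC variance:", np.var(draws, ddof=1))
```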
Bottleneck Effects on Genetic Variance for Courtship Repertoire
Meffert, L. M.
1995-01-01
Bottleneck effects on evolutionary potential in mating behavior were addressed through assays of additive genetic variances and resulting phenotypic responses to drift in the courtship repertoires of six two-pair founder-flush lines and two control populations of the housefly. A simulation addressed the complication that an estimate of the genetic variance for a courtship trait (e.g., male performance vigor or the female requirement for copulation) must involve assays against the background behavior of the mating partners. The additive "environmental" effect of the mating partner's phenotype simply dilutes the net parent-offspring covariance for a trait. However, if there is an interaction with this "environmental" component, negative parent-offspring covariances can result under conditions of high incompatibility between the population's distributions for male performance and female choice requirements, despite high levels of genetic variance. All six bottlenecked lines exhibited significant differentiation from the controls in at least one measure of the parent-offspring covariance for male performance or female choice (estimated by 50 parent-son and 50 parent-daughter covariances for 10 courtship traits per line) which translated to significant phenotypic drift. However, the average effect across traits or across lines did not yield a significant net increase in genetic variance due to bottlenecks. Concerted phenotypic differentiation due to the founder-flush event provided indirect evidence of directional dominance in a subset of traits. Furthermore, indirect evidence of genotype-environment interactions (potentially producing genotype-genotype effects) was found in the negative parent-offspring covariances predicted by the male-female interaction simulation and by the association of the magnitude of phenotypic drift with the absolute value of the parent-offspring covariance. Hence, nonadditive genetic effects on mating behavior may be important in
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
Entropy, Fisher Information and Variance with Frost-Musulin Potential
NASA Astrophysics Data System (ADS)
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropies for both position and momentum space, and the Fisher information, for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum-mechanical probability has been obtained via the Fisher information. The variance information of this potential is also computed. These quantities control both the chemical and physical properties of some molecular systems. We observe the behaviour of the Shannon entropy, Renyi entropy, Fisher information, and variance with the quantum number n, respectively.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
The principle of stationary variance in quantum field theory
NASA Astrophysics Data System (ADS)
Siringo, Fabio
2014-02-01
The principle of stationary variance is advocated as a viable variational approach to quantum field theory (QFT). The method is based on the principle that the variance of the energy should be at its minimum when the state of a quantum system reaches its best approximation to an eigenstate. While not especially popular in quantum mechanics (QM), the method is shown to be valuable in QFT, and three special examples are given in very different areas, ranging from the Heisenberg model of antiferromagnetism (AF) to quantum electrodynamics (QED) and gauge theories.
Chen, Jiangyao; Huang, Yong; Li, Guiying; An, Taicheng; Hu, Yunkun; Li, Yunlu
2016-01-25
Volatile organic compounds (VOCs) emitted during the electronic waste dismantling process (EWDP) were treated at a pilot scale, using integrated electrostatic precipitation (EP)-advanced oxidation technologies (AOTs, subsequent photocatalysis (PC) and ozonation). Although no obvious alteration was seen in VOC concentration and composition, EP technology removed 47.2% of total suspended particles, greatly reducing the negative effect of particles on subsequent AOTs. After the AOT treatment, average removal efficiencies of 95.7%, 95.4%, 87.4%, and 97.5% were achieved for aromatic hydrocarbons, aliphatic hydrocarbons, halogenated hydrocarbons, as well as nitrogen- and oxygen-containing compounds, respectively, over a 60-day treatment period. Furthermore, high elimination capacities were also seen using the hybrid technique of PC with ozonation; this was due to the PC unit's high loading rates and excellent pre-treatment abilities, and the ozonation unit's high elimination capacity. In addition, the non-cancer and cancer risks, as well as the occupational exposure cancer risk, for workers exposed to emitted VOCs in the workshop were reduced dramatically after treatment with the integrated technique. Results demonstrated that the integrated technique led to highly efficient and stable VOC removal from EWDP emissions at a pilot scale. This study points to an efficient approach for atmospheric purification and improving human health in e-waste recycling regions.
Benedek, K.; Flytzani-Stephanopoulos, M.
1996-02-01
The team of Arthur D. Little, Tufts University and Engelhard Corporation will be conducting Phase I of a four-and-a-half-year, two-phase effort to develop and scale-up an advanced byproduct recovery technology that is a direct, single-stage, catalytic process for converting sulfur dioxide to elemental sulfur. This catalytic process reduces SO2 over a fluorite-type oxide (such as ceria or zirconia). The catalytic activity can be significantly promoted by active transition metals, such as copper. More than 95% elemental sulfur yield, corresponding to almost complete sulfur dioxide conversion, was obtained over a Cu-Ce-O oxide catalyst as part of an ongoing DOE-sponsored University Coal Research Program. This type of mixed metal oxide catalyst has stable activity, high selectivity for sulfur production, and is resistant to water and carbon dioxide poisoning. Tests with CO and CH4 reducing gases indicate that the catalyst has the potential for flexibility with regard to the composition of the reducing gas, making it attractive for utility use. The performance of the catalyst is consistently good over a range of SO2 inlet concentrations (0.1 to 10%), indicating its flexibility in treating SO2 tail gases as well as high-concentration streams.
NASA Astrophysics Data System (ADS)
Bobrowska, Alicja; Domonik, Andrzej
2015-09-01
In construction, the usefulness of modern technical diagnostics of stone as a raw material requires predicting the effects of long-term environmental impacts on its qualities and geomechanical properties. The paper presents geomechanical research that identifies the factors behind strength loss in stone and forecasts the long-term rate at which destructive phenomena develop in the stone structure. As research material, Turkish travertines were selected from the Denizli-Kaklık Basin (Pamukkale and Hierapolis quarries); these stones have been commonly used for centuries in global architecture. The rock material was subjected to tests of the impact of various environmental factors, both those recommended by European standards and those added in the authors' research program. Resistance to the crystallization of salts from aqueous solutions and to the effects of SO2, as well as to frost and high temperatures, is presented. The studies established the following quantitative indicators: the ultrasonic wave index (IVp) and the strength reduction index (IRc). Assessment of the deterioration effects indicates that the most active factors decreasing travertine resistance in the aging process are frost and sulphur dioxide (SO2). Their negative influence is particularly intense when the stone material is already strongly weathered.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black-box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
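A minimal sketch of the pairing idea for a single pure-birth channel, assuming an affine intensity as in the paper's variance-reduction proof; the rate function and parameters are invented. Each step inverts the Poisson increment distribution through a uniform draw, so a path driven by u and its mirror driven by 1 − u are negatively correlated, and their pairwise average has lower variance than an average of two independent paths.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)

def tau_leap_path(x0, dt, rate, u):
    """Tau-leaping for a pure-birth chain; the array u supplies one uniform
    per step, inverted through the Poisson CDF to produce the increment."""
    x = x0
    for uk in u:
        x += int(poisson.ppf(uk, rate(x) * dt))
    return x

x0, steps, dt = 10, 50, 0.1
rate = lambda x: 2.0 + 0.5 * x        # affine birth intensity (assumed)
n_pairs = 1000

indep, anti = [], []
for _ in range(n_pairs):
    u1 = np.clip(rng.uniform(size=steps), 1e-12, 1 - 1e-12)
    u2 = np.clip(rng.uniform(size=steps), 1e-12, 1 - 1e-12)
    indep.append(0.5 * (tau_leap_path(x0, dt, rate, u1)
                        + tau_leap_path(x0, dt, rate, u2)))
    anti.append(0.5 * (tau_leap_path(x0, dt, rate, u1)
                       + tau_leap_path(x0, dt, rate, 1.0 - u1)))

# Both pair averages are unbiased for E[X_T]; the antithetic pairs should
# show a substantially smaller variance.
print("independent pairs:", np.mean(indep), np.var(indep, ddof=1))
print("antithetic pairs :", np.mean(anti), np.var(anti, ddof=1))
```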
Not Available
1991-12-31
ABB CE's Low NOx Bulk Furnace Staging (LNBFS) System and Low NOx Concentric Firing System (LNCFS) are demonstrated in stepwise fashion. These systems incorporate the concept of advanced overfire air (AOFA), clustered coal nozzles, and offset air. A complete description of the installed technologies is provided in the following section. The primary objective of the Plant Lansing Smith demonstration is to determine the long-term effects of commercially available tangentially-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology are also being performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project.
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
Variance Components for NLS: Partitioning the Design Effect.
ERIC Educational Resources Information Center
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
Approved for public release; distribution is unlimited. The Allan Variance (AV) characterizes the temporal randomness in sensor output data streams at various time scales. The conventional formula for calculating the AV assumes that the data samples are uniformly spaced in time. This report presents a modified approach to AV calculation, which accommodates nonuniformly spaced time samples. The basic concept of the modified approach is
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Temporal Relation Extraction in Outcome Variances of Clinical Pathways.
Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio
2015-01-01
Recently, the clinical pathway has progressed with digitization and the analysis of activity. There are many previous studies on the clinical pathway, but few feed directly into medical practice. We constructed a mind-map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization.
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.
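Downstream of such formulas, an equivalent-degrees-of-freedom value is typically plugged into the standard chi-squared interval for the true Allan variance. A short sketch follows; the edf value here is invented rather than computed from the report's approximations.

```python
import numpy as np
from scipy.stats import chi2

def allan_ci(avar, edf, conf=0.95):
    """Chi-squared confidence interval for a true Allan variance, given an
    estimate and its equivalent degrees of freedom (edf), under the standard
    model edf * avar_hat / sigma^2 ~ chi2(edf)."""
    lo = edf * avar / chi2.ppf(0.5 + conf / 2, edf)
    hi = edf * avar / chi2.ppf(0.5 - conf / 2, edf)
    return lo, hi

# Hypothetical AVAR estimate with edf = 17 (an edf value would normally come
# from closed-form approximations such as those discussed in the report):
print(allan_ci(2.5e-22, 17))
```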
Code of Federal Regulations, 2013 CFR
2013-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Variances....
Code of Federal Regulations, 2014 CFR
2014-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Variances....
Code of Federal Regulations, 2010 CFR
2010-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances....
Code of Federal Regulations, 2012 CFR
2012-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true Variances....
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
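The pathology is easy to reproduce outside QMC. The classic toy estimator below has mean exactly 1 but infinite variance, and its naive error bar is consequently erratic and misleadingly small, which is precisely the failure mode the abstract warns about.

```python
import numpy as np

rng = np.random.default_rng(7)

# f(u) = 1/(2*sqrt(u)) on (0, 1) integrates to exactly 1, but E[f^2]
# diverges, so the Monte Carlo estimator has infinite variance.
def mc_estimate(n):
    u = np.clip(rng.random(n), 1e-300, None)   # guard against u == 0
    f = 0.5 / np.sqrt(u)
    return f.mean(), f.std(ddof=1) / np.sqrt(n)

for n in (10**4, 10**5, 10**6):
    est, err = mc_estimate(n)
    print(f"n={n:>8}  estimate={est:.4f}  naive error bar={err:.4f}")
# The naive error bar is driven by rare draws near u = 0: it jumps around
# between runs and systematically understates the true error.
```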
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
Perspective projection for variance pose face recognition from camera calibration
NASA Astrophysics Data System (ADS)
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face-feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
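As a sketch of the underlying geometry only (a standard pinhole projection with invented intrinsics, not the authors' algorithm): once the intrinsic matrix K is known, 3D feature positions map to pixels by perspective division, and a small head rotation changes the apparent inter-eye distance, which is why pose must be accounted for when comparing feature measurements.

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(X):
    """Pinhole projection of a 3D point in camera coordinates to pixels."""
    x = K @ X
    return x[:2] / x[2]          # perspective division

# Two eye centres 62 mm apart, ~0.6 m from the camera; the head is rotated
# so the right eye sits 30 mm farther away than the left.
left_eye  = np.array([-0.031, 0.0, 0.60])
right_eye = np.array([ 0.031, 0.0, 0.63])
pL, pR = project(left_eye), project(right_eye)
print(pL, pR, "apparent inter-eye distance:", np.linalg.norm(pL - pR), "px")
```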
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
The Threat of Common Method Variance Bias to Theory Building
ERIC Educational Resources Information Center
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... from any requirement of an applicable implementation plan with respect to a stationary source....
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS PROCEDURES...) When a State issues a permit on which EPA has made a variance decision, separate appeals of the State... issues in both proceedings, the Regional Administrator will decide, in consultation with State...
Exploratory Multivariate Analysis of Variance: Contrasts and Variables.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…
20 CFR 901.40 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
36 CFR 30.5 - Variances, exceptions, and use permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF THE INTERIOR WHISKEYTOWN-SHASTA-TRINITY NATIONAL RECREATION AREA: ZONING STANDARDS FOR WHISKEYTOWN UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for the zoning districts comprising the Whiskeytown Unit of the Whiskeytown-Shasta-Trinity...
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution requires only a minor modification to standard algorithms: adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
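Although the "bridge link" remedy above is specific to the QMC path integral, the failure mode itself is easy to reproduce. The following minimal Python sketch (an illustration only, not the paper's method; the Pareto tail index and sample sizes are arbitrary choices) draws from a distribution with a finite mean but infinite variance and shows that the naive Monte Carlo error bar is meaningless:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pareto with tail index 1.5: the mean exists (equals 3 here),
# but the variance is infinite.
alpha = 1.5

for n in (10**3, 10**4, 10**5, 10**6):
    x = rng.pareto(alpha, size=n) + 1.0      # classic Pareto on [1, inf)
    stderr = x.std(ddof=1) / np.sqrt(n)      # naive Monte Carlo error bar
    print(f"n={n:>8}  mean={x.mean():7.3f}  naive stderr={stderr:7.4f}")

# The printed "error bar" does not shrink like 1/sqrt(n) and jumps
# erratically between runs: the sample variance never converges.
```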
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
Not Available
1992-12-31
The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NO{sub x} emission levels to be approximately 0.65 lb/MBtu. Flyash LOI values for the LNB configuration are approximately 8 percent at full load. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. Abbreviated diagnostic tests for the LNB+AOFA configuration indicate that at 500 MWe, NO{sub x} emissions are approximately 0.55 lb/MBtu with corresponding flyash LOI values of approximately 11 percent. For comparison, the long-term full-load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB+AOFA configuration will be performed when the stack particulate emissions issue is resolved. Testing of a process optimization package on Plant Hammond Unit 4 was performed during this quarter. The software was configured to minimize NO{sub x} emissions using total combustion air flow and advanced overfire air distribution as the controlled parameters. Preliminary results from this testing indicate that this package shows promise in reducing NO{sub x} emissions while maintaining or improving other boiler performance parameters.
1978-12-01
AD-A041/70. AFFDL-TR-78-179, Wright-Patterson AFB: EFFECT OF VARIANCES AND MANUFACTURING TOLERANCES ON... Cited: "Degradation For Advanced Composites", Lockheed-California, F33615-77-C-3084, quarterlies 1977 to present; Phillips, D. C. and Scott, J. M., "The Shear
29 CFR 1905.11 - Variances and other relief under section 6(d).
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND..., Limitations, Variations, Tolerances, Exemptions and Other Relief § 1905.11 Variances and other relief...
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter-hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. An indication of a weak two-year variation in the tropics is found, which is presumably related to the quasi-biennial oscillation (QBO). The AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. The novel discovery of AIRS' capability to observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in general circulation models (GCMs).
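A rough numerical sketch of this variance bookkeeping is given below. It assumes a hypothetical 2-D radiance array (scans x cross-track FOVs) for one channel and uses a simple running-mean background; neither detail is taken from the AIRS processing itself:

```python
import numpy as np

def perturbations(radiance, bg_window=15):
    """Remove a running-mean background along each cross-track scan."""
    kernel = np.ones(bg_window) / bg_window
    background = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, radiance)
    return radiance - background

def scan_variances(radiance, bg_window=15):
    """GW variance proxy per scan from the residual perturbations."""
    return perturbations(radiance, bg_window).var(axis=1)

def west_east_contrast(radiance, n_edge=10, bg_window=15):
    """Zonal-propagation proxy: perturbation variance of the westmost
    FOV block minus that of the eastmost block, per scan."""
    pert = perturbations(radiance, bg_window)
    return pert[:, :n_edge].var(axis=1) - pert[:, -n_edge:].var(axis=1)

rng = np.random.default_rng(0)
fake = rng.normal(size=(100, 90))   # stand-in for 100 scans of 90 AIRS FOVs
print(scan_variances(fake)[:3])
print(west_east_contrast(fake)[:3])
```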
Hydrograph variances over different timescales in hydropower production networks
NASA Astrophysics Data System (ADS)
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.
Kones, Richard
2010-01-01
The objectives in treating angina are relief of pain and prevention of disease progression through risk reduction. Mechanisms, indications, clinical forms, doses, and side effects of the traditional antianginal agents – nitrates, β-blockers, and calcium channel blockers – are reviewed. A number of patients have contraindications or remain unrelieved from anginal discomfort with these drugs. Among newer alternatives, ranolazine, recently approved in the United States, indirectly prevents the intracellular calcium overload involved in cardiac ischemia and is a welcome addition to available treatments. None, however, are disease-modifying agents. Two options for refractory angina, enhanced external counterpulsation and spinal cord stimulation (SCS), are presented in detail. They are both well-studied and are effective means of treating at least some patients with this perplexing form of angina. Traditional modifiable risk factors for coronary artery disease (CAD) – smoking, hypertension, dyslipidemia, diabetes, and obesity – account for most of the population-attributable risk. Individual therapy of high-risk patients differs from population-wide efforts to prevent risk factors from appearing or reducing their severity, in order to lower the national burden of disease. Current American College of Cardiology/American Heart Association guidelines to lower risk in patients with chronic angina are reviewed. The Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial showed that in patients with stable angina, optimal medical therapy alone and percutaneous coronary intervention (PCI) with medical therapy were equal in preventing myocardial infarction and death. The integration of COURAGE results into current practice is discussed. For patients who are unstable, with very high risk, with left main coronary artery lesions, in whom medical therapy fails, and in those with acute coronary syndromes, PCI is indicated. Asymptomatic
Noam Lior; Stuart W. Churchill
2003-10-01
the Gordon Conference on Modern Development in Thermodynamics. The results obtained are very encouraging for the development of the RCSC as a commercial burner for significant reduction of NO{sub x} emissions, and highly warrant further study and development.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) state are measurable. Furthermore, we also conclude that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
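The compounding construction is compact enough to demonstrate directly. In the sketch below, the inverse-gamma choice for the local variances is purely illustrative (it makes the compound exactly Student-t); the paper's point is precisely that the parameter distribution should be determined empirically:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_windows, window_len = 2000, 50

# Draw one variance per window, then locally Gaussian data inside it.
nu = 5.0
local_var = stats.invgamma.rvs(a=nu / 2, scale=nu / 2,
                               size=n_windows, random_state=rng)
series = np.concatenate(
    [rng.normal(0.0, np.sqrt(v), size=window_len) for v in local_var])

# Short horizons look Gaussian; the full series is heavy-tailed
# (excess kurtosis of a Student-t with nu = 5 is 6).
print("excess kurtosis, full series:", stats.kurtosis(series))
```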
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperatures anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
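As a concrete companion to the one-way fixed-effects case, the sketch below computes the variance partition and F statistic by hand and checks the result against scipy; the three sample groups are made up:

```python
import numpy as np
from scipy import stats

def one_way_anova(*groups):
    """One-way fixed-effects ANOVA via the between/within variance split."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.mean(np.concatenate(groups))

    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)

    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, stats.f.sf(f, df_between, df_within)

g1 = [4.1, 5.2, 5.9, 4.8]
g2 = [6.3, 7.1, 6.8, 7.4]
g3 = [5.0, 5.5, 4.7, 5.1]
print(one_way_anova(g1, g2, g3))   # agrees with scipy.stats.f_oneway
print(stats.f_oneway(g1, g2, g3))
```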
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
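For orientation, a minimal sketch of the standard time-domain mvar estimator from phase data follows; it implements the textbook phase-averaged second difference, not the third-difference edf machinery derived in the study:

```python
import numpy as np

def mod_allan_variance(phase, tau0, m):
    """Modified Allan variance at tau = m * tau0 from phase data (seconds).

    The second difference of the phase is averaged over m adjacent
    starting points before squaring, which is what distinguishes mvar
    from the ordinary Allan variance.
    """
    x = np.asarray(phase, dtype=float)
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase points")
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences
    csum = np.concatenate(([0.0], np.cumsum(d2)))
    inner = csum[m:] - csum[:-m]                  # moving sums of m terms
    tau = m * tau0
    return (inner ** 2).mean() / (2.0 * m**2 * tau**2)

# For white phase noise, mvar should fall roughly as tau**-3.
rng = np.random.default_rng(2)
x = rng.normal(size=100_000) * 1e-9
for m in (1, 4, 16, 64):
    print(m, mod_allan_variance(x, tau0=1.0, m=m))
```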
Stochastic variance models in discrete time with feedforward neural networks.
Andoh, Charles
2009-07-01
The study overcomes the estimation difficulty in stochastic variance models for discrete financial time series with feedforward neural networks. The volatility function is estimated semiparametrically. The model is used to estimate market risk, taking into account not only the time series of interest but extra information on the market. As an application, some stock prices series are studied and compared with the nonlinear ARX-ARCHX model.
Selection and genetic (co)variance in bighorn sheep.
Coltman, David W; O'Donoghue, Paul; Hogg, John T; Festa-Bianchet, Marco
2005-06-01
Genetic theory predicts that directional selection should deplete additive genetic variance for traits closely related to fitness, and may favor the maintenance of alleles with antagonistically pleiotropic effects on fitness-related traits. Trait heritability is therefore expected to decline with the degree of association with fitness, and some genetic correlations between selected traits are expected to be negative. Here we demonstrate a negative relationship between trait heritability and association with lifetime reproductive success in a wild population of bighorn sheep (Ovis canadensis) at Ram Mountain, Alberta, Canada. Lower heritability for fitness-related traits, however, was not wholly a consequence of declining genetic variance, because those traits showed high levels of residual variance. Genetic correlations estimated between pairs of traits with significant heritability were positive. Principal component analyses suggest that positive relationships between morphometric traits constitute the main axis of genetic variation. Trade-offs in the form of negative genetic or phenotypic correlations among the traits we have measured do not appear to constrain the potential for evolution in this population.
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h2, h1, h0, h-1 and h-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start, the meaning of these Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
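A minimal sketch of the resulting two-state model is shown below. The mapping used from Allan-variance coefficients to process-noise densities covers only white FM and random-walk FM (flicker noise has no exact finite-state equivalent), and the numerical h values are arbitrary illustrations:

```python
import numpy as np

def clock_model(h0, hm2, dt):
    """Two-state clock model (phase, frequency): transition F and noise Q.

    Assumed mapping from Allan-variance coefficients:
      white FM        : q1 = h0 / 2          (sigma_y^2(tau) = q1 / tau)
      random-walk FM  : q2 = 2 * pi**2 * hm2 (sigma_y^2(tau) = q2 * tau / 3)
    """
    q1 = h0 / 2.0
    q2 = 2.0 * np.pi**2 * hm2
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = np.array([[q1 * dt + q2 * dt**3 / 3.0, q2 * dt**2 / 2.0],
                  [q2 * dt**2 / 2.0,           q2 * dt]])
    return F, Q

# Propagate the clock covariance P for 100 one-second steps.
F, Q = clock_model(h0=2e-20, hm2=6e-23, dt=1.0)
P = np.zeros((2, 2))
for _ in range(100):
    P = F @ P @ F.T + Q
print(P)   # predicted phase/frequency uncertainty after 100 s
```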
Sample variance of non-Gaussian sky distributions
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1995-02-01
Non-Gaussian distributions of cosmic microwave background (CMB) anisotropies have been proposed to reconcile the discrepancies between different experiments at half-degree scales (Coulson et al. 1994). Each experiment probes a different part of the sky; furthermore, sky coverage is very small, hence the sample variance of each experiment can be large, especially when the sky signal is non-Gaussian. We model the degree-scale CMB sky as a chi-squared field with n degrees of freedom and show that the sample variance is enhanced over that of a Gaussian distribution by a factor of (n + 6)/n. The sample variances for different experiments are calculated, both for Gaussian and non-Gaussian distributions. We also show that if the distribution is highly non-Gaussian (n less than or approximately = 4) at half-degree scales, then the non-Gaussian signature of the CMB could be detected in the FIRS map, though probably not in the Cosmic Background Explorer (COBE) map.
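The (n + 6)/n enhancement follows from the excess kurtosis 12/n of a chi-squared variable and is easy to check numerically. A minimal Monte Carlo verification (degrees of freedom, sample size and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, sample_size, n_trials = 4, 200, 20_000

def var_of_sample_variance(draw):
    """Variance (over many trials) of the sample variance of one draw."""
    s2 = np.array([np.var(draw(), ddof=1) for _ in range(n_trials)])
    return s2.var()

# chi^2 field with n degrees of freedom vs a Gaussian of equal variance.
chi2 = var_of_sample_variance(lambda: rng.chisquare(n_dof, size=sample_size))
gauss = var_of_sample_variance(
    lambda: rng.normal(0.0, np.sqrt(2 * n_dof), size=sample_size))

print("measured ratio    :", chi2 / gauss)
print("predicted (n+6)/n :", (n_dof + 6) / n_dof)
```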
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D.; Seitz, S.; Becker, M. R.; ...
2015-04-13
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^-1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
Analysis of micrometeorological data using a two sample variance
NASA Astrophysics Data System (ADS)
Werle, Peter; Falge, Eva
2010-05-01
In ecosystem research infrared gas analyzers are increasingly used to measure fluxes of carbon dioxide, water vapour, methane, nitrous oxide and even stable carbon isotopes. As these complex measurement devices cannot be considered absolutely stable under field conditions, drift characterisation is an issue when distinguishing between atmospheric data and sensor drift. In this paper the concept of the two sample variance is utilized, in analogy to previous stability investigations, to characterize the stationarity of both spectroscopic measurements of concentration time series and micrometeorological data in the time domain, which is a prerequisite for covariance calculations. As an example, the method is applied to assess the time constant for detrending of time series data and the optimum trace gas flux integration time. The method described here provides information similar to existing characterizations such as the ogive analysis, the normalized error variance of the second order moment and the spectral characteristics of turbulence in the inertial subrange. The method is easy to implement and, therefore, well suited to serve as a useful tool for a routine data quality check for both new practitioners and experts in the field. Werle, P., Time domain characterization of micrometeorological data based on a two sample variance. Agric. Forest Meteorol. (2010), doi:10.1016/j.agrformet.2009.12.007
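A minimal sketch of the underlying two-sample variance follows. The synthetic drift term is an arbitrary stand-in for the slow sensor drift the method is meant to expose; the minimum of the curve marks the optimum integration or detrending time:

```python
import numpy as np

def two_sample_variance(y, m):
    """Two-sample (Allan-type) variance of series y at averaging factor m.

    Averages of m successive points are formed; half the mean squared
    difference of adjacent averages is returned.  White noise falls as
    1/m, while drift makes the curve rise again at large m.
    """
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

rng = np.random.default_rng(4)
t = np.arange(100_000)
series = rng.normal(size=t.size) + 1e-8 * t**1.5   # white noise + slow drift
for m in (1, 10, 100, 1000, 10000):
    print(m, two_sample_variance(series, m))
```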
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests covered robustness to additive Gaussian noise, segmentation accuracy on different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... applicable. (c) The request for variance shall document the requesting facility's inability to meet the... the previous six months; (4) As relevant to the request, the names of all physicians on the active... variance. (h) The Secretary, in granting a variance, will specify the period for which the variance...
Blangero, John; Diego, Vincent P.; Dyer, Thomas D.; Almeida, Marcio; Peralta, Juan; Kent, Jack W.; Williams, Jeff T.; Almasy, Laura; Göring, Harald H. H.
2014-01-01
Statistical genetic analysis of quantitative traits in large pedigrees is a formidable computational task due to the necessity of taking the non-independence among relatives into account. With the growing awareness that rare sequence variants may be important in human quantitative variation, heritability and association study designs involving large pedigrees will increase in frequency due to the greater chance of observing multiple copies of rare variants amongst related individuals. Therefore, it is important to have statistical genetic test procedures that utilize all available information for extracting evidence regarding genetic association. Optimal testing for marker/phenotype association involves the exact calculation of the likelihood ratio statistic, which requires the repeated inversion of potentially large matrices. In a whole genome sequence association context, such computation may be prohibitive. Toward this end, we have developed a rapid and efficient eigensimplification of the likelihood that makes analysis of family data commensurate with the analysis of a comparable sample of unrelated individuals. Our theoretical results, which are based on a spectral representation of the likelihood, yield simple exact expressions for the expected likelihood ratio test statistic (ELRT) for pedigrees of arbitrary size and complexity. For heritability, the ELRT is −∑ ln[1 + ĥ²(λ_gi − 1)], where ĥ² and λ_gi are, respectively, the heritability and the eigenvalues of the pedigree-derived genetic relationship kernel (GRK). For association analysis of sequence variants, the ELRT is given by ELRT[h²_q > 0 : unrelateds] − (ELRT[h²_t > 0 : pedigrees] − ELRT[h²_r > 0 : pedigrees]), where h²_t, h²_q, and h²_r are the total, quantitative trait nucleotide, and residual heritabilities, respectively. Using these results, fast and accurate analytical power analyses are possible, eliminating the need for computer simulation. Additional benefits of eigensimplification include a simple method for calculation of the exact distribution of the ELRT under the null hypothesis, which turns out to differ from that expected under the usual asymptotic theory. Further, when combined with the use of empirical GRKs (estimated over a large number of genetic markers), our theory reveals potential problems associated with kernels that are not positive semi-definite. These procedures are being added to our general statistical genetic computer package, SOLAR. PMID:23419715
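Because the heritability ELRT above depends on the GRK only through its eigenvalues, it is a one-liner to evaluate. A minimal sketch (the sib-pair kinship structure and h² value are toy inputs, not from the paper):

```python
import numpy as np

def elrt_heritability(h2, eigenvalues):
    """ELRT = -sum(ln(1 + h2 * (lam_i - 1))) over GRK eigenvalues lam_i.

    For unrelated individuals all eigenvalues are 1 and the ELRT is 0:
    the relatedness structure is what carries the information.
    """
    lam = np.asarray(eigenvalues, dtype=float)
    return -np.sum(np.log1p(h2 * (lam - 1.0)))

# Toy GRK: ten independent sib pairs (relatedness 0.5 within each pair).
pair = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
grk = np.kron(np.eye(10), pair)
lam = np.linalg.eigvalsh(grk)
print(elrt_heritability(0.4, lam))
```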
Variance of Dispersion Coefficients in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Dentz, Marco; De Barros, Felipe P. J.
2013-04-01
We study the dispersion of a passive solute in heterogeneous porous media using a stochastic modeling approach. Heterogeneity on one hand leads to an increase of solute spreading, which is described by the well-known macrodispersion phenomenon. On the other hand, it induces uncertainty about the dispersion behavior, which is quantified by ensemble averages over suitably defined dispersion coefficients in single medium realizations. We focus here on the sample to sample fluctuations of dispersion coefficients about their ensemble mean values for solutes evolving from point-like and extended source distributions in d = 2 and d = 3 spatial dimensions. The definition of dispersion coefficients in single medium realizations for finite source sizes is not unique, unlike for point-like sources. Thus, we first discuss a series of dispersion measures, which describe the extension of the solute plume, as well as dispersion measures that quantify the solute dispersion relative to the injection point. The sample to sample fluctuations of these observables are quantified in terms of the variance with respect to their ensemble averages. We find that the ensemble averages of these dispersion measures may be identical, their fluctuation behavior, however, may be very different. This is quantified using perturbation expansions in the fluctuations of the random flow field. We derive explicit expressions for the time evolution of the variance of the dispersion coefficients. The characteristic time scale for the variance evolution is given by the typical dispersion time over the characteristic heterogeneity scale and the dimensions of the source. We find that the dispersion variances asymptotically decrease to zero in d = 3 dimensions, which means, the dispersion coefficients are self-averaging observables, at least for moderate heterogeneity. In d = 2 dimensions, the variance converges towards a finite asymptotic value that is independent of the source distribution. Dispersion is not
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
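The proposed indicator is, in essence, a per-region SNR map. The sketch below is a generic stand-in rather than the authors' estimator: noise is probed with a 4-neighbour Laplacian residual (which inflates white-noise variance by a factor of 20) and compared with the local intensity variance:

```python
import numpy as np

def snr_map(img, patch=16):
    """Patchwise signal-to-noise proxy for a grayscale image.

    Low-SNR patches flag low-contrast regions where dense matching
    (and hence the derived topography) is unreliable.
    """
    img = np.asarray(img, dtype=float)
    # 4-neighbour Laplacian of white noise has 20x the noise variance.
    lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    core = img[1:-1, 1:-1]
    h, w = core.shape[0] // patch, core.shape[1] // patch
    snr = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            noise_var = lap[sl].var() / 20.0
            snr[i, j] = core[sl].var() / max(noise_var, 1e-12)
    return snr

rng = np.random.default_rng(7)
img = rng.normal(size=(130, 130))                            # noise-only "sand"
img[:, :65] += 20 * np.sin(np.linspace(0, 6, 130))[:, None]  # textured half
print(snr_map(img).round(1))   # ~1 where only noise, >>1 where contrast exists
```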
Litzow, Michael A.; Piatt, J.F.
2003-01-01
We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.
A new approach for crop identification with wavelet variance and JM distance.
Qiu, Bingwen; Fan, Zhanling; Zhong, Ming; Tang, Zhenghong; Chen, Chongcheng
2014-11-01
This paper develops a new crop mapping method through the combined utilization of both time and frequency information, based on wavelet variance and the Jeffries-Matusita (JM) distance (CIWJ for short). A two-dimensional wavelet spectrum was obtained from datasets of daily continuous vegetation indices through a continuous wavelet transform using the Mexican hat and the Morlet mother wavelets. The time-average wavelet variance (TAWV) and the scale-average wavelet variance (SAWV) were then calculated based on the wavelet spectra of the Mexican hat and the Morlet wavelet, respectively. Class separability based on the JM distance was evaluated to determine the proper period or scale range to apply. Finally, a procedure for criteria quantification was developed using the TAWV and SAWV as the major metrics, and the similarity between unclassified pixels and established land use/cover types was calculated. The proposed CIWJ method was applied to the middle Hexi Corridor in northwest China using 250-m 8-day composite moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index (EVI) time series datasets in 2012. The CIWJ method was shown to be efficient in crop field mapping, with an overall accuracy of 83.6% and a kappa coefficient of 0.7009, assessed against 30-m Chinese Environmental Disaster Reduction Satellite (HJ-1)-derived data. Compared with methods utilizing information on either frequency or time alone, the CIWJ method demonstrates tremendous potential for efficient crop mapping and for further applications. This method could be applied to either coarse or high spatial resolution images for agricultural crop identification, as well as other more general or specific land use classifications.
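The JM distance used for class separability has a standard closed form under Gaussian class models, sketched below; the two feature vectors stand in for hypothetical per-class wavelet-variance (TAWV/SAWV) statistics:

```python
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance 2*(1 - exp(-B)) between two Gaussian class models.

    B is the Bhattacharyya distance; JM saturates at 2 for perfectly
    separable classes, which is why it is popular for band selection.
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    cov_mean = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    b = (diff @ np.linalg.solve(cov_mean, diff) / 8.0
         + 0.5 * np.log(np.linalg.det(cov_mean)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2.0 * (1.0 - np.exp(-b))

# Two hypothetical crop classes in a 2-D wavelet-variance feature space.
print(jeffries_matusita([0.2, 1.1], np.diag([0.01, 0.04]),
                        [0.6, 0.7], np.diag([0.02, 0.03])))
```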
Loberg, A; Dürr, J W; Fikse, W F; Jorjani, H; Crooks, L
2015-10-01
The amount of variance captured in genetic estimations may depend on whether a pedigree-based or genomic relationship matrix is used. The purpose of this study was to investigate the genetic variance as well as the variance of predicted genetic merits (PGM) using pedigree-based or genomic relationship matrices in Brown Swiss cattle. We examined a range of traits in six populations amounting to 173 population-trait combinations. A main aim was to determine how using different relationship matrices affect variance estimation. We calculated ratios between different types of estimates and analysed the impact of trait heritability and population size. The genetic variances estimated by REML using a genomic relationship matrix were always smaller than the variances that were similarly estimated using a pedigree-based relationship matrix. The variances from the genomic relationship matrix became closer to estimates from a pedigree relationship matrix as heritability and population size increased. In contrast, variances of predicted genetic merits obtained using a genomic relationship matrix were mostly larger than variances of genetic merit predicted using pedigree-based relationship matrix. The ratio of the genomic to pedigree-based PGM variances decreased as heritability and population size rose. The increased variance among predicted genetic merits is important for animal breeding because this is one of the factors influencing genetic progress.
Ultrasonic beam fluctuation and flaw signal variance in inhomogeneous media
NASA Astrophysics Data System (ADS)
Ahmed, S.; Roberts, R.; Margetan, F.
2000-05-01
This paper examines the effect of forward scattering on ultrasonic beam propagation and flaw signal amplitude in inhomogeneous material microstructures. A beam propagating through a weakly-scattering, randomly inhomogeneous medium will display random fluctuations in amplitude and phase, attributable to forward scattering. Correspondingly, the signal received from a given flaw at a given position in the beam volume will fluctuate as the beam and flaw are simultaneously scanned throughout the volume of an inhomogeneous host medium. These effects have been prominently observed in the inspection of titanium. For example, maps of beam amplitude profiles after transmission through titanium reveal severe distortion of beam amplitude and phase. Similarly, signals from "identical" flat bottom holes (FBH) at equal depths but different lateral positions in titanium display a random variation in amplitude. Interestingly, it has been noted that this FBH signal variance varies inversely with the beam diameter; that is, signal variance normalized to the mean signal amplitude is a minimum when the flaw is in the focal zone of a focused beam. As this observation has great significance for the inspection of titanium, a model prediction of this phenomenon is being sought. In the work reported here, beam propagation is formulated as a volumetric integral equation employing the Green function for the homogeneous spatial mean of the medium. The integral equation is solved using iterative methods. Preliminary work considering scalar two-dimensional propagation in inhomogeneous media has predicted a flaw signal variance that displays an inverse relation to beam diameter, thus reproducing the qualitative behavior seen in experimental data in titanium. Current work is extending the preliminary two-dimensional scalar result to three-dimensional elasticity, representing propagation in an actual titanium microstructure. Progress on this effort will be reported.
Minimum Variance Approaches to Ultrasound Pixel-Based Beamforming.
Nguyen, Nghia Q; Prager, Richard W
2017-02-01
We analyze the principles underlying minimum variance distortionless response (MVDR) beamforming in order to integrate it into a pixel-based algorithm. There is a challenge posed by the low echo signal-to-noise ratio (eSNR) when calculating beamformer contributions at pixels far away from the beam centreline. Together with the well-known scarcity of samples for covariance matrix estimation, this reduces the beamformer performance and degrades the image quality. To address this challenge, we implement the MVDR algorithm in two different ways. First, we develop the conventional minimum variance pixel-based (MVPB) beamformer that performs the MVDR after the pixel-based superposition step. This involves a combination of methods in the literature, extended over multiple transmits to increase the eSNR. Then we propose the coherent MVPB beamformer, where the MVDR is applied to data within individual transmits. Based on pressure field analysis, we develop new algorithms to improve the data alignment and matrix estimation, and hence overcome the low-eSNR issue. The methods are demonstrated on data acquired with an ultrasound open platform. The results show the coherent MVPB beamformer substantially outperforms the conventional MVPB in a series of experiments, including phantom and in vivo studies. Compared to the unified pixel-based beamformer, the newest delay-and-sum algorithm in [1], the coherent MVPB performs well on regions that conform to the diffuse scattering assumptions on which the minimum variance principles are based. It produces less good results for parts of the image that are dominated by specular reflections.
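Both variants ultimately solve the same distortionless-response problem. A minimal sketch of the MVDR weight computation with diagonal loading follows; the loading level and the all-ones steering vector (i.e., data already delay-aligned to the pixel) are illustrative assumptions:

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-2):
    """MVDR (Capon) weights w = R^-1 a / (a^H R^-1 a).

    Diagonal loading regularizes the sample covariance, the usual remedy
    for the snapshot scarcity discussed in the abstract.
    """
    n = R.shape[0]
    R = R + loading * np.trace(R).real / n * np.eye(n)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(5)
n_ch, n_snap = 16, 24
snaps = (rng.normal(size=(n_ch, n_snap))
         + 1j * rng.normal(size=(n_ch, n_snap)))
R = snaps @ snaps.conj().T / n_snap          # sample covariance, few snapshots
a = np.ones(n_ch, dtype=complex)             # steering vector after alignment
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))                     # distortionless: gain 1 toward a
```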
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the NIfTI (Neuroimaging Informatics Technology Initiative) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
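The core idea of carrying each subject's effect estimate together with its variance can be sketched at a single voxel with a moment-based random-effects combination; this is a generic illustration in the spirit of MEMA, not the program's actual estimator:

```python
import numpy as np

def combine_effects(beta, var):
    """Combine per-subject effects (beta) and within-subject variances (var).

    The cross-subject variance tau^2 is estimated with the
    DerSimonian-Laird moment method; subjects are then weighted by
    1 / (within-variance + tau^2), so noisy subjects count for less.
    """
    beta, var = np.asarray(beta, float), np.asarray(var, float)
    w = 1.0 / var
    mean_fe = np.sum(w * beta) / np.sum(w)
    q = np.sum(w * (beta - mean_fe) ** 2)            # heterogeneity statistic
    k = len(beta)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1.0 / (var + tau2)
    effect = np.sum(w_re * beta) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return effect, se, effect / se                   # group effect, SE, t

# Eight subjects; the last has a large effect but also a huge variance.
beta = [0.9, 1.1, 1.0, 1.3, 0.8, 1.2, 1.0, 4.0]
var = [0.04, 0.05, 0.04, 0.06, 0.05, 0.04, 0.05, 1.5]
print(combine_effects(beta, var))
```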
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
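For the homoscedastic case with a known error-variance ratio, the closed-form errors-in-variables fit is the classical Deming solution; the sketch below applies it to synthetic magnitudes (the true line and noise levels are made up):

```python
import numpy as np

def deming_fit(X, Y, var_x, var_y):
    """Errors-in-variables line fit y = a*x + b with both magnitudes noisy.

    delta = var_y / var_x is the error-variance ratio; ordinary least
    squares is recovered in the limit var_x -> 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    delta = var_y / var_x
    sxx, syy = np.var(X, ddof=1), np.var(Y, ddof=1)
    sxy = np.cov(X, Y, ddof=1)[0, 1]
    a = ((syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                      + 4.0 * delta * sxy ** 2))
         / (2.0 * sxy))
    return a, Y.mean() - a * X.mean()

rng = np.random.default_rng(6)
x_true = rng.uniform(4.0, 7.0, size=300)
X = x_true + rng.normal(0.0, 0.15, size=300)               # e.g. observed mb
Y = 0.8 * x_true + 1.0 + rng.normal(0.0, 0.10, size=300)   # e.g. observed Mw
print(deming_fit(X, Y, var_x=0.15**2, var_y=0.10**2))      # near (0.8, 1.0)
```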
Not Available
1993-12-31
The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NO{sub x} emission levels to be approximately 0.65 lb/MBtu with flyash LOI values of approximately 8 percent. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. For comparison, the long-term full-load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB plus AOFA configuration began in May 1993 and is scheduled to end during August 1993. As of June 30, the diagnostic, performance, and chemical emissions test segments for this configuration have been conducted and 29 days of long-term emissions data collected. Preliminary results from the May-June 1993 tests of the LNB plus AOFA system show that the full-load NO{sub x} emissions are approximately 0.42 lb/MBtu with corresponding fly ash LOI values near 8 percent. This is a substantial improvement in both NO{sub x} emissions and LOI values when compared to the results obtained during the February-March 1992 abbreviated testing of this system.
The Reduction of Advanced Military Aircraft Noise
2011-12-01
... the total pressure upstream of the nozzle. The facility uses helium-air jet mixtures to simulate heated air jets. The partial pressures of both the ... the tank, and then the air flow is regulated via pressure regulators and control valves located in a piping cabinet before being fed to a plenum and ... provide the helium-air mixture jets in order to simulate the heated jets. The individual partial pressures of the helium and air are both ...
Compression station upgrades include advanced noise reduction
Dunning, V.R.; Sherikar, S.
1998-10-01
Since its inception in the mid-'80s, AlintaGas' Dampier to Bunbury natural gas pipeline has been constantly undergoing a series of upgrades to boost capacity and meet other needs. Extending northward about 850 miles from near Perth to the northwest shelf, the 26-inch line was originally served by five compressor stations. In the 1989-91 period, three new compressor stations were added to increase capacity and a ninth station was added in 1997. Instead of using noise-path-treatment mufflers to reduce existing noise, it was decided to use noise-source-treatment technology to prevent noise creation in the first place. In the field, operation of these new noise-source-treatment attenuators has been very quiet. If there was any thought earlier of guaranteed noise-level verification, it is not considered a priority now. It's also anticipated that as AlintaGas proceeds with its pipeline and compressor station upgrade program, similar noise-source-treatment equipment will be employed and retrofitted into older stations where the need to reduce noise and potential radiant-heat exposure is indicated.
Multi-observable Uncertainty Relations in Product Form of Variances
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-01-01
We investigate product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables have been derived, which are shown to be better than those derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than an existing uncertainty relation for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851
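For orientation (this is the standard textbook relation, not a result of the paper), the two-observable Schrödinger uncertainty relation that such product-form relations generalize reads:

```latex
\Delta A^{2}\,\Delta B^{2} \;\ge\;
\left|\tfrac{1}{2}\langle\{A,B\}\rangle-\langle A\rangle\langle B\rangle\right|^{2}
+\left|\tfrac{1}{2i}\langle[A,B]\rangle\right|^{2}
```

The n-observable relations of the paper bound the product ΔA₁²⋯ΔAₙ² from below by state-dependent combinations of such commutator and anticommutator terms; the specific three-observable bound is not reproduced here.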
Improved Robustness through Population Variance in Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Matthews, David C.; Sutton, Andrew M.; Hains, Doug; Whitley, L. Darrell
Ant Colony Optimization algorithms are population-based Stochastic Local Search algorithms that mimic the behavior of ants, simulating pheromone trails to search for solutions to combinatorial optimization problems. This paper introduces Population Variance, a novel approach to ACO algorithms that allows parameters to vary across the population over time, leading to solution construction differences that are not strictly stochastic. The increased exploration appears to help the search escape from local optima, significantly improving the robustness of the algorithm with respect to suboptimal parameter settings.
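To make the population-variance idea concrete, here is a toy sketch (not the authors' implementation) of an Ant System for the travelling salesman problem in which each ant draws its own pheromone and heuristic weights (alpha, beta) from a population distribution rather than sharing one fixed setting. All parameter ranges and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tour_length(tour, dist):
    return dist[tour, np.roll(tour, -1)].sum()

def aco_population_variance(dist, n_ants=20, n_iters=100, rho=0.1):
    """Toy Ant System for the TSP with per-ant (alpha, beta) variation."""
    n = len(dist)
    tau = np.ones((n, n))               # pheromone trails
    eta = 1.0 / (dist + np.eye(n))      # heuristic visibility (avoid /0)
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            alpha = rng.uniform(0.5, 1.5)   # per-ant parameter variation
            beta = rng.uniform(1.0, 5.0)    # (hypothetical ranges)
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * eta[i, cand] ** beta
                tour.append(rng.choice(cand, p=w / w.sum()))
                unvisited.discard(tour[-1])
            tours.append(np.array(tour))
        lengths = np.array([tour_length(t, dist) for t in tours])
        if lengths.min() < best_len:
            best_len, best_tour = lengths.min(), tours[lengths.argmin()]
        tau *= (1 - rho)                     # evaporation
        for t, L in zip(tours, lengths):     # pheromone deposit
            tau[t, np.roll(t, -1)] += 1.0 / L
    return best_tour, best_len

pts = rng.random((15, 2))
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
print(aco_population_variance(dist))
```

The only change relative to a standard Ant System is the two `rng.uniform` draws inside the ant loop; with fixed alpha and beta the code reduces to the classical algorithm.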
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
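A minimal sketch of a total-variance-style estimator for phase data, following the reflection-extension idea described in the abstract. The index conventions and normalization below follow the commonly published TOTVAR definition and are an assumption here; check the original references before serious use.

```python
import numpy as np

def totvar(x, m, tau0=1.0):
    """Total variance of phase data x at averaging time m*tau0 (sketch).

    The series is extended at both ends by uninverted reflection, then an
    Allan-like second difference is averaged over the extended series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    assert 1 <= m <= n - 1
    # x*(1-j) = 2 x(1) - x(1+j) and x*(n+j) = 2 x(n) - x(n-j), j = 1..n-2
    left = 2.0 * x[0] - x[1:n - 1][::-1]
    right = 2.0 * x[-1] - x[1:n - 1][::-1]
    xs = np.concatenate([left, x, right])
    k = np.arange(1, n - 1) + len(left)       # original interior points
    d2 = xs[k - m] - 2.0 * xs[k] + xs[k + m]
    return (d2 ** 2).sum() / (2.0 * (m * tau0) ** 2 * (n - 2))

# White-FM noise: phase is the cumulative sum of white frequency noise.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1024))
for m in (1, 4, 16, 64):
    print(m, np.sqrt(totvar(x, m)))           # TOTALDEV at m*tau0
```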
Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
Birch, G F; Taylor, S E; Matthai, C
2001-01-01
The comparatively large proportion of total variance associated with small-scale spatial and temporal variability in the field calls into question the often excessive cost and effort spent on attempting minor reductions in analytical precision in contaminant investigations.
Hasenbusch, Martin
2016-03-01
The exchange or geometric cluster algorithm allows us to define a variance-reduced estimator of the connected two-point function in the presence of a broken Z(2)-symmetry. We present numerical tests for the improved Blume-Capel model on the simple-cubic lattice. We perform simulations for the critical isotherm, the low-temperature phase at vanishing external field, and, for comparison, also the high-temperature phase. For the connected two-point function, a substantial reduction of the variance can be obtained, allowing us to compute the correlation length ξ with high precision. Based on these results, estimates for various universal amplitude ratios that characterize the universality class of the three-dimensional Ising model are computed.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
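For readers unfamiliar with hydraulic exponents, the standard bookkeeping (a textbook fact, not specific to this paper) is:

```latex
v \propto Q^{m}, \qquad d \propto Q^{f}, \qquad w \propto Q^{b},
\qquad Q = w\,d\,v \;\Longrightarrow\; b + f + m = 1 .
```

Roughly speaking, the minimum-variance theory selects the exponent set that minimizes the summed variances (squared exponents) of the participating variables subject to this continuity constraint.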
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-09-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
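As a numerical illustration of how an exponential variance decay becomes a chronometer, consider the hedged sketch below. The times, variances, and decay rate are invented for illustration and are not the calibration reported in the paper.

```python
import numpy as np

# Hypothetical concentration variance measured at several times during a
# mixing experiment (values made up for illustration).
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # minutes since mixing began
var = np.array([0.50, 0.34, 0.22, 0.10, 0.02])   # concentration variance

# Exponential decay sigma2(t) = sigma2(0) * exp(-R t): fit R on a log scale.
slope, intercept = np.polyfit(t, np.log(var), 1)
R = -slope
print(f"decay rate R = {R:.3f} per minute")

# Invert the clock: the variance preserved in an erupted product gives the
# mixing-to-eruption time lapse.
var_erupted = 0.05
t_eruption = -np.log(var_erupted / np.exp(intercept)) / R
print(f"inferred mixing-to-eruption time ~ {t_eruption:.1f} minutes")
```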
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, D.; De Campos, C. P.; Petrelli, M.; Dingwell, D. B.
2015-12-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Hidden temporal order unveiled in stock market volatility variance
NASA Astrophysics Data System (ADS)
Shapira, Y.; Kenett, D. Y.; Raviv, Ohad; Ben-Jacob, E.
2011-06-01
When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not appear in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series deviates strongly from the expected random behavior, which is the signature of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean: it converges to the mean value μ, while the variance σ²_c(t) decays approximately as t⁻¹. Since the variance of the scalar typically decays faster than that of a sample mean (the decay exponent is greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
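A quick numerical check of the sample-mean analogy: across many independent realizations, the variance of a running mean decays as t⁻¹. This is a minimal sketch of the LLN baseline, not of the authors' pdf model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Variance across realizations of a running sample mean of iid N(0,1)
# increments should follow 1/t.
n_real, n_steps = 5000, 1000
increments = rng.normal(size=(n_real, n_steps))
running_mean = np.cumsum(increments, axis=1) / np.arange(1, n_steps + 1)
variance = running_mean.var(axis=0)

for t in (10, 100, 1000):
    print(t, variance[t - 1], 1.0 / t)   # empirical vs. 1/t prediction
```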
Replica approach to mean-variance portfolio optimization
NASA Astrophysics Data System (ADS)
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
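The 1/(1 - r) blow-up is easy to see numerically for the global minimum-variance portfolio with an identity true covariance (a hedged sketch; each r uses a single random draw, so the printed numbers fluctuate, especially near r = 1):

```python
import numpy as np

rng = np.random.default_rng(7)

def min_variance_weights(cov):
    """Minimum-variance portfolio under the budget constraint w . 1 = 1."""
    ones = np.ones(len(cov))
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

N = 50
true_cov = np.eye(N)                 # true optimal variance is 1/N
for r in (0.2, 0.5, 0.8, 0.95):
    T = int(N / r)
    X = rng.multivariate_normal(np.zeros(N), true_cov, size=T)
    w = min_variance_weights(np.cov(X, rowvar=False))
    out_of_sample = w @ true_cov @ w            # realized (true) variance
    print(f"r={r:.2f}  out-of-sample/optimal = {N * out_of_sample:6.2f}"
          f"   1/(1-r) = {1 / (1 - r):6.2f}")
```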
Discordance of DNA Methylation Variance Between two Accessible Human Tissues
Jiang, Ruiwei; Jones, Meaghan J.; Chen, Edith; Neumann, Sarah M.; Fraser, Hunter B.; Miller, Gregory E.; Kobor, Michael S.
2015-01-01
Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically, we compared probe-wise DNA methylation variance and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BEC and PBMC methylation with demographic variables. The work presented here offers insight into the variability of DNA methylation between individuals and across tissues, and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations. PMID:25660083
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle everything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected tunable model parameters: four involving emissions of anthropogenic and biogenic volatile organic compounds (VOCs), anthropogenic semi-volatile and intermediate-volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of semi-volatile SOA to non-volatile SOA is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate- to high-NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
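For context, here is a compact implementation of plain ordinary kriging with a spherical semivariogram, the baseline that the proposed variance-component method extends (the TEC-like data and variogram parameters below are invented for illustration):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_par=10.0, nugget=0.0):
    """Ordinary kriging prediction and kriging variance at point xy0."""
    def gamma(h):
        h = np.asarray(h, dtype=float)
        g = nugget + sill * (1.5 * h / rng_par - 0.5 * (h / rng_par) ** 3)
        return np.where(h >= rng_par, nugget + sill,
                        np.where(h == 0, 0.0, g))

    n = len(z)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(d)                 # semivariogram system
    A[n, :n] = A[:n, n] = 1.0            # Lagrange multiplier row/column
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    lam = np.linalg.solve(A, b)
    z0 = lam[:n] @ z                     # interpolated value
    var0 = lam @ b                       # kriging variance
    return z0, var0

# Toy TEC-like example (coordinates in degrees, values in TECU; made up).
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 2.0]])
z = np.array([25.0, 27.0, 26.0, 28.0, 31.0])
print(ordinary_kriging(xy, z, np.array([0.5, 0.5]), sill=4.0, rng_par=3.0))
```

The paper's contribution, not reproduced here, is to estimate unknown variance components for both the ionospheric signal and the measurement errors instead of fixing the variogram parameters a priori.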
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-01-01
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the... chapter to determine the date that an issuance under this subpart was provided. (Approved by the Office...
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of the radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the following: the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting the analysis to certain levels of the controlling factors. Application of the developed approach to the characterization of the radon exposure of the world population is discussed.
Sources of variance of downwelling irradiance in water.
Gege, Peter; Pinnel, Nicole
2011-05-20
The downwelling irradiance in water is highly variable due to the focusing and defocusing of sunlight and skylight by the wave-modulated water surface. While the time scales and intensity variations caused by wave focusing are well studied, little is known about the induced spectral variability. Also, the impact of variations of sensor depth and inclination during the measurement on spectral irradiance has not been studied much. We have developed a model that relates the variance of spectral irradiance to the relevant parameters of the environmental and experimental conditions. A dataset from three German lakes was used to validate the model and to study the importance of each effect as a function of depth for the range of 0 to 5 m.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
On computations of variance, covariance and correlation for interval data
NASA Astrophysics Data System (ADS)
Kishida, Masako
2017-02-01
In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
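A brute-force illustration for small n (a sketch under the assumptions stated in the comments, not the ν-based algorithm of the paper): the maximum of the variance over the box of intervals is attained at a vertex because the variance is convex in each coordinate, while the minimum can be found by alternating minimization of the jointly convex objective.

```python
import itertools
import numpy as np

def variance_bounds(intervals):
    """Bounds on the sample variance of interval data (small-n sketch).

    Maximum: the variance is convex in each x_i, so its maximum over the
    box is attained at one of the 2^n vertices; the enumeration below is
    exponential, consistent with the NP-hardness noted in the abstract.
    Minimum: sum_i (x_i - m)^2 is jointly convex in (x, m), so alternating
    between clamping each x_i to the current mean and recomputing the mean
    converges to the global minimum.
    """
    lo_ends = np.array([a for a, _ in intervals], dtype=float)
    hi_ends = np.array([b for _, b in intervals], dtype=float)

    v_max = max(np.var(p, ddof=1) for p in itertools.product(*intervals))

    x = (lo_ends + hi_ends) / 2.0
    for _ in range(200):                    # alternating minimization
        x = np.clip(x.mean(), lo_ends, hi_ends)
    v_min = np.var(x, ddof=1)
    return v_min, v_max

print(variance_bounds([(1.0, 2.0), (2.0, 3.0), (2.5, 4.0)]))
# -> (0.0625, ~2.33); the intervals share no common point, so v_min > 0
```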
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
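As a usage illustration (in Python with statsmodels rather than the SPSS package used in the paper), a one-way repeated measures ANOVA on hypothetical long-format data might look like this:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one score per subject per time point.
data = pd.DataFrame({
    "subject": list(range(8)) * 3,
    "time":    ["t1"] * 8 + ["t2"] * 8 + ["t3"] * 8,
    "score":   [5, 6, 4, 7, 6, 5, 6, 7,
                6, 7, 5, 8, 7, 6, 7, 8,
                8, 9, 7, 9, 8, 8, 9, 9],
})

# Repeated measures ANOVA with "time" as the within-subject factor.
res = AnovaRM(data, depvar="score", subject="subject", within=["time"]).fit()
print(res)   # F test for the time effect
```

Follow-up pairwise comparisons would then need a multiplicity correction, for example Bonferroni-adjusted paired t-tests, in line with the paper's point about appropriate multiple comparison procedures.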
Objective Bayesian Comparison of Constrained Analysis of Variance Models.
Consonni, Guido; Paroli, Roberta
2016-10-04
In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means μ1, μ2, μ3 through an analysis of variance (ANOVA), a model may specify that μ1 = μ2 = μ3, while another one may state that μ1 ≤ μ2 ≤ μ3, and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges down to scales of kρ_i ≈ 5.
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The objective of environment statistics is to provide information on the environment and its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics is required to produce statistical information of high quality, for which timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring such data, and these problems cause measurement errors. We consider the problem of estimating the measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling design considered is two-stage sampling.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of the "observed-minus-forecast" residuals (O − F), where O is an observed value and F is usually a short-term model forecast. This assumption is acceptable for several types of observations (except humidity) that are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a₃ = μ₃/σ³ and the kurtosis a₄ = μ₄/σ⁴ − 3, where μᵢ is the central moment of order i and σ is the standard deviation. It is well known that for a normal distribution a₃ = a₄ = 0.
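A small check of the normality assumption on hypothetical O − F residuals (SciPy's kurtosis with fisher=True returns the excess kurtosis a₄ directly):

```python
import numpy as np
from scipy import stats

# Hypothetical O-minus-F residuals; a3 and a4 near zero support the
# normality assumption behind the variance-correction method.
omf = np.random.default_rng(3).normal(0.0, 1.2, size=5000)
a3 = stats.skew(omf)                     # mu_3 / sigma^3
a4 = stats.kurtosis(omf, fisher=True)    # mu_4 / sigma^4 - 3
print(f"skewness a3 = {a3:.3f}, excess kurtosis a4 = {a4:.3f}")
```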
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
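The dependence of hypothesis tests on parametrization and computing procedure can be seen directly with an unbalanced toy design (a hedged sketch using statsmodels; the data are synthetic):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)

# Unbalanced two-factor layout (one cell deliberately small).
df = pd.DataFrame({
    "a": ["a1"] * 10 + ["a2"] * 4,
    "b": (["b1"] * 5 + ["b2"] * 5) + (["b1"] * 3 + ["b2"] * 1),
})
df["y"] = rng.normal(size=len(df)) + (df["a"] == "a2") * 1.0

# Type I (sequential) sums of squares: order-dependent when unbalanced.
m1 = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(anova_lm(m1, typ=1))

# Type III requires sum-to-zero contrasts to test meaningful hypotheses.
m3 = smf.ols("y ~ C(a, Sum) * C(b, Sum)", data=df).fit()
print(anova_lm(m3, typ=3))
```

With balanced data the two tables agree; under the unbalanced layout above they generally differ, which is exactly the ambiguity the abstract describes.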
From means and variances to persons and patterns
Grice, James W.
2015-01-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672
Hodological Resonance, Hodological Variance, Psychosis, and Schizophrenia: A Hypothetical Model
Birkett, Paul Brian Lawrie
2011-01-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network (“hodological resonance”) becomes so high that it activates spontaneously, to produce a hallucination, if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example, Charles Bonnet. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia. PMID:21811475
Variance of the quantum dwell time for a nonrelativistic particle
Hahne, G. E.
2013-01-15
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular, those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and of the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
Water vapor variance measurements using a Raman lidar
NASA Technical Reports Server (NTRS)
Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.
1992-01-01
Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures the water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75-meter altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum obtained between 00:22 and 10:29 UT on December 6, 1991 is shown in the figure. The curve shown in the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a −5/3 power law over the spectral interval 3×10⁻⁵ to 4×10⁻³ Hz. The flattening of the spectrum for frequencies greater than 6×10⁻³ Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure are calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, which demonstrate changes in the water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
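A minimal sketch of the spectral analysis described above, applied to a synthetic stand-in series (a random walk, whose true log-log slope is −2 rather than −5/3). With real lidar data, x would be the mixing-ratio time sequence at one altitude level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a 1-minute mixing-ratio series at one altitude level.
dt = 60.0                                # seconds between profiles
x = np.cumsum(rng.normal(size=600))      # red-noise substitute for real data

x = x - x.mean()
freq = np.fft.rfftfreq(len(x), d=dt)[1:]         # drop the zero frequency
power = np.abs(np.fft.rfft(x))[1:] ** 2          # periodogram

# Fit the log-log slope over the resolved band and compare with -5/3.
slope = np.polyfit(np.log(freq), np.log(power), 1)[0]
print(f"spectral slope = {slope:.2f} (Kolmogorov inertial range: -5/3)")
```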
Effective dimension reduction for sparse functional data.
Yao, F; Lei, E; Wu, Y
2015-06-01
We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method.
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
Normand, Jacques; Li, Jih-Heng; Thomson, Nicholas; Jarlais, Don Des
2014-01-01
The “Harm Reduction” session was chaired by Dr. Jacques Normand, Director of the AIDS Research Program of the U.S. National Institute on Drug Abuse. The three presenters (and their presentation topics) were: Dr. Don Des Jarlais (High Coverage Needle/Syringe Programs for People Who Inject Drugs in Low and Middle Income Countries: A Systematic Review), Dr. Nicholas Thomson (Harm Reduction History, Response, and Current Trends in Asia), and Dr. Jih-Heng Li (Harm Reduction Strategies in Taiwan). PMID:25278732
Sensi, M; Morano, S; Morelli, S; Castaldo, P; Sagratella, E; De Rossi, M G; Andreani, D; Caltabiano, V; Vetri, M; Purrello, F; Di Mario, U
1998-09-01
Advanced glycation end-products (AGEs) are irreversible compounds which, by abnormally accumulating over proteins as a consequence of diabetic hyperglycaemia, can damage tissues and thus contribute to the pathogenesis of diabetic complications. This study was performed to evaluate whether restoration of euglycaemia by islet transplantation modifies AGE accumulation in central and peripheral nervous tissue proteins and, as a comparison, in proteins from a non-nervous tissue. Two groups of streptozotocin diabetic inbred Lewis rats with 4 (T1) or 8 (T2) months of disease duration were grafted into the liver via the portal vein with 1200-1500 islets freshly isolated from normal Lewis rats. Transplanted rats, and age-matched control and diabetic rats studied in parallel, were followed for a further 4-month period. At study conclusion, glycaemia, glycated haemoglobin and body weight were measured in all animals, and an oral glucose tolerance test (OGTT) was performed in transplanted rats. AGE levels in cerebral cortex, spinal cord, sciatic nerve proteins and tail tendon collagen were measured by enzyme-linked immunosorbent assay (ELISA). Transplanted animal OGTTs were within normal limits, as were glycaemia and glycated haemoglobin. Diabetic animal AGEs were significantly higher than those of control animals. Protein AGE values were reduced in many transplanted animals compared to diabetic animals, reaching statistical significance in spinal cord (P < 0.05), sciatic nerve (P < 0.02) and tail tendon collagen (P < 0.05) of T1 animals. Thus, return to euglycaemia following islet transplantation after 4 months of diabetes with poor metabolic control reduces the AGE accumulation rate in the protein fractions of the mixed and purely peripheral nervous tissues (spinal cord and sciatic nerve, respectively). However, after twice the duration of poor metabolic control, a statistically significant AGE reduction was not achieved in any of the tissues, suggesting the importance of an early
Not Available
1993-08-17
This report presents results from the third phase of an Innovative Clean Coal Technology (ICC-1) project demonstrating advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from a coal-fired boiler. The purpose of this project was to study the NO{sub x} emissions characteristics of ABB Combustion Engineering's (ABB CE) Low NO{sub x} Concentric Firing System (LNCFS) Levels I, II, and III. These technologies were installed and tested in a stepwise fashion at Gulf Power Company's Plant Lansing Smith Unit 2. The objective of this report is to provide the results from Phase III. During that phase, Levels I and III of the ABB C-E Services Low NO{sub x} Concentric Firing System were tested. The LNCFS Level III technology includes separated overfire air, close-coupled overfire air, clustered coal nozzles, flame attachment coal nozzle tips, and concentric firing. The LNCFS Level I was simulated by closing the separated overfire air nozzles of the LNCFS Level III system. Based upon long-term data, LNCFS Level III reduced NO{sub x} emissions by 45 percent at full load. LOI levels with LNCFS Level III increased slightly; however, tests showed that LOI levels with LNCFS Level III were highly dependent upon coal fineness. After correcting for leakage air through the separated overfire air system, the simulated LNCFS Level I reduced NO{sub x} emissions by 37 percent. There was no increase in LOI with LNCFS Level I.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2010 CFR
2010-07-01
... responsibility (primacy state) that issues variances shall require a community water system to install and/or use... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... application by a system for a variance, the Administrator or primacy state that issues variances...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Variances for Hazardous Selenium Bearing Waste AGENCY: Environmental Protection Agency (EPA). ACTION: Direct...-Bearing Waste II. Basis for This Determination III. Development of This Variance A. U.S. Ecology Nevada... from 0.16 mg/L to 5.7 mg/L TCLP. C. Site-Specific Treatment Variance for Selenium-Bearing Waste On...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
ERIC Educational Resources Information Center
Bray, Marilyn; And Others
1996-01-01
Presents activities that focus on waste reduction in the school and community. The ideas are divided into grade level categories. Sample activities include Techno-Trash, where children use tools to take apart broken appliances or car parts, then reassemble them or build new creations. Activities are suggested for areas including language arts and…
Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu
2016-12-07
The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. Moreover, a FOG-based MWD system often keeps working underground for several days, so the gyro data collected are not only very long, and therefore time-consuming to process, but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we advance the fast algorithm further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6×10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. Finally, a vibration experiment with a FOG-based MWD system has been carried out to validate the good performance of the improved fast DAVAR. The results of the experiment confirm that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series.
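For readers unfamiliar with the underlying quantity, the sketch below shows a plain dynamic Allan variance (not the authors' improved fast algorithm): an overlapped Allan variance recomputed over a sliding window. The window sizes and the white-noise gyro series are assumptions for illustration only.

```python
import numpy as np

def allan_variance(y, m):
    """Overlapped Allan variance of rate samples y at cluster size m
    (averaging time tau = m * dt for sample interval dt)."""
    c = np.cumsum(np.insert(y, 0, 0.0))
    avg = (c[m:] - c[:-m]) / m          # overlapping cluster averages
    d = avg[m:] - avg[:-m]              # differences of adjacent clusters
    return 0.5 * np.mean(d ** 2)

def davar(y, m, win, step):
    """Sliding-window ('dynamic') Allan variance over the series y."""
    return np.array([allan_variance(y[s:s + win], m)
                     for s in range(0, len(y) - win + 1, step)])

rng = np.random.default_rng(2)
gyro = rng.normal(scale=0.01, size=20000)   # hypothetical white-noise gyro rates
print(davar(gyro, m=10, win=5000, step=2500))
```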
Cantele, Francesca; Lanzavecchia, Salvatore; Bellon, Pier Luigi
2004-11-01
VIVA is a software library that obtains low-resolution models of icosahedral viruses from projections observed at the electron microscope. VIVA works in a fully automatic way without any initial model. This feature eliminates the possibility of bias that could originate from the alignment of the projections to an external preliminary model. VIVA determines the viewing direction of the virus images by computation of sets of single particle reconstructions (SPR) followed by a variance analysis and classification of the 3D models. All structures are reduced in size to speed up computation. This limits the resolution of a VIVA reconstruction. The models obtained can subsequently be refined with standard libraries. To date, VIVA has successfully solved the structure of all viruses tested, some of which were considered refractory particles. The VIVA library is written in the 'C' language and is designed to run on widespread Linux computers.
Gallais, A
1992-01-01
For autotetraploid species, the development of the concept of test value (value in testcross) leads to a simple description of the variance among testcross progenies. When genetic effects are defined directly at the level of the value of the progenies, there is no contribution of tri- and tetragenic interactions. To estimate additive and dominance variances it is only necessary to have the population of progenies structured in half-sib or full-sib families; it is then possible to detect the presence of epistasis using a two-way mating design. When the theory of recurrent selection is applied, dominance variance can be neglected for the prediction of genetic advance in one cycle, as well as for the development of combined selection when progenies are structured in families. The results are similar to those for diploids with two-locus epistasis. The most efficient scheme consists of pair-crossing in off-season generations (for intercrossing) and simultaneous crossing of each plant to the tester. In comparison to the classical scheme, the relative efficiency of such a scheme is 41% higher. The use of combined selection will further increase this superiority.
Technology Transfer Automated Retrieval System (TEKTRAN)
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...
The association of Kienbock's disease and ulnar variance in the Iranian population.
Afshar, A; Aminzadeh-Gohari, A; Yekta, Z
2013-06-01
We retrospectively determined the distribution of ulnar variance in 60 patients with Kienböck's disease. We also measured the ulnar variances in 400 standard wrist radiographs in the normal adult population. The mean ulnar variance of the Kienböck's group was -1.1 mm (SD 1.7) and the mean ulnar variance of the general population was +0.7 (SD 1.5), which was significantly different. In the Kienböck's disease group there were 38 (63%) with ulnar negative, 16 (27%) neutral and six (10%) with ulnar positive variance. The preponderance of ulnar negative variance was statistically significant. There was an association between ulnar negative variance and the development of Kienböck's disease in this study.
Hakky, Tariq S.; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E.
2015-01-01
Objective Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Introduction Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. Materials and Methods We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. Results The patient tolerated the procedure well and had resolution of his corporal disfigurement. Conclusions Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement. PMID:26005988
Dziewinski, Jacek J.; Marczak, Stanislaw
2000-01-01
Nitrates are reduced to nitrogen gas by contacting the nitrates with a metal to reduce the nitrates to nitrites which are then contacted with an amide to produce nitrogen and carbon dioxide or acid anions which can be released to the atmosphere. Minor amounts of metal catalysts can be useful in the reduction of the nitrates to nitrites. Metal salts which are formed can be treated electrochemically to recover the metals.
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
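A generic pick-freeze Monte Carlo estimator of the first-order Sobol' indices, the standard tool of variance-based sensitivity analysis, is sketched below. The test model, names, and sample sizes are invented and are not taken from the article.

```python
import numpy as np

def first_order_sobol(model, n, d, seed=0):
    """Monte Carlo (pick-freeze) estimate of first-order Sobol' indices,
    using the Saltelli-style estimator V_i ~ mean(f(B) * (f(AB_i) - f(A)))."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = model(A), model(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # resample only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# A model that is non-linear in its inputs, where sensitivity analysis adds
# insight beyond linear propagation of uncertainty (purely illustrative).
f = lambda X: X[:, 0] + X[:, 1] ** 2 + X[:, 0] * X[:, 2]
print(first_order_sobol(f, n=200_000, d=3))
```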
Cosmic variance and the measurement of the local Hubble parameter.
Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel
2013-06-14
There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.
Linear constraint minimum variance beamformer functional magnetic resonance inverse imaging.
Lin, Fa-Hsuan; Witzel, Thomas; Zeffiro, Thomas A; Belliveau, John W
2008-11-01
Accurate estimation of the timing of neural activity is required to fully model the information flow among functionally specialized regions whose joint activity underlies perception, cognition and action. Attempts to detect the fine temporal structure of task-related activity would benefit from functional imaging methods allowing higher sampling rates. Spatial filtering techniques have been used in magnetoencephalography source imaging applications. In this work, we use the linearly constrained minimum variance (LCMV) beamformer localization method to reconstruct single-shot volumetric functional magnetic resonance imaging (fMRI) data using signals acquired simultaneously from all channels of a high-density radio-frequency (RF) coil array. The LCMV beamformer method generalizes the existing volumetric magnetic resonance inverse imaging (InI) technique, achieving higher detection sensitivity while maintaining whole-brain spatial coverage and 100 ms temporal resolution. In this paper, we begin by introducing the LCMV reconstruction formulation and then quantitatively assess its performance using both simulated and empirical data. To demonstrate the sensitivity and inter-subject reliability of volumetric LCMV InI, we employ an event-related design to probe the spatial and temporal properties of task-related hemodynamic signal modulations in primary visual cortex. Compared to minimum-norm estimate (MNE) reconstructions, LCMV offers better localization accuracy and superior detection sensitivity. Robust results from both single subject and group analyses demonstrate the excellent sensitivity and specificity of volumetric InI in detecting the spatial and temporal structure of task-related brain activity.
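At the core of any LCMV reconstruction is the classic unit-gain weight formula w = C⁻¹L(LᵀC⁻¹L)⁻¹. A minimal sketch follows, with a hypothetical 32-channel array and an invented forward field; the paper's full InI pipeline is of course far more involved.

```python
import numpy as np

def lcmv_weights(C, L, reg=1e-6):
    """Classic LCMV spatial filter: w = C^{-1} L (L^T C^{-1} L)^{-1}.
    C: (nchan, nchan) data covariance; L: (nchan, nsrc) forward field(s)."""
    nchan = C.shape[0]
    Cinv = np.linalg.inv(C + reg * np.trace(C) / nchan * np.eye(nchan))
    CiL = Cinv @ L
    return CiL @ np.linalg.inv(L.T @ CiL)

# Hypothetical 32-channel array, single source orientation.
rng = np.random.default_rng(3)
nchan = 32
L = rng.normal(size=(nchan, 1))                 # invented forward solution
data = L @ rng.normal(size=(1, 1000)) + 0.5 * rng.normal(size=(nchan, 1000))
C = np.cov(data)
w = lcmv_weights(C, L)
print(w.T @ L)   # ~1: unit gain at the source location (the LCMV constraint)
```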
Analysis of variance (ANOVA) models in lower extremity wounds.
Reed, James F
2003-06-01
Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other, using 3 Student t tests. If we were to compare 4 treatment groups, then we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated-measures ANOVA, and randomized-block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
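The experiment-wise error rate quoted above follows from the complement rule for k independent tests, 1 − (1 − α)^k; a two-line check:

```python
# Family-wise error rate for k independent t tests at level alpha:
# P(at least one Type I error) = 1 - (1 - alpha)**k.
alpha = 0.05
for k in (3, 6, 10):
    print(k, round(1 - (1 - alpha) ** k, 3))
# 3 tests -> 0.143 (the ".14" quoted above); 6 tests -> 0.265; 10 -> 0.401
```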
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and the treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual-scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases with more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also re-visit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in sensitivity to the conventional f-test while still controlling for Type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.
Chromatic visualization of reflectivity variance within hybridized directional OCT images
NASA Astrophysics Data System (ADS)
Makhijani, Vikram S.; Roorda, Austin; Bayabo, Jan Kristine; Tong, Kevin K.; Rivera-Carpio, Carlos A.; Lujan, Brandon J.
2013-03-01
This study presents a new method of visualizing hybridized images of retinal spectral domain optical coherence tomography (SDOCT) data comprised of varied directional reflectivity. Due to the varying reflectivity of certain retinal structures relative to angle of incident light, SDOCT images obtained with differing entry positions result in nonequivalent images of corresponding cellular and extracellular structures, especially within layers containing photoreceptor components. Harnessing this property, cross-sectional pathologic and non-pathologic macular images were obtained from multiple pupil entry positions using commercially-available OCT systems, and custom segmentation, alignment, and hybridization algorithms were developed to chromatically visualize the composite variance of reflectivity effects. In these images, strong relative reflectivity from any given direction visualizes as relative intensity of its corresponding color channel. Evident in non-pathologic images was marked enhancement of Henle's fiber layer (HFL) visualization and varying reflectivity patterns of the inner limiting membrane (ILM) and photoreceptor inner/outer segment junctions (IS/OS). Pathologic images displayed similar and additional patterns. Such visualization may allow a more intuitive understanding of structural and physiologic processes in retinal pathologies.
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
Anatomically constrained minimum variance beamforming applied to EEG.
Murzin, Vyacheslav; Fuchs, Armin; Kelso, J A Scott
2011-10-01
Neural activity as measured non-invasively using electroencephalography (EEG) or magnetoencephalography (MEG) originates in the cortical gray matter. In the cortex, pyramidal cells are organized in columns and activated coherently, leading to current flow perpendicular to the cortical surface. In recent years, beamforming algorithms have been developed, which use this property as an anatomical constraint for the locations and directions of potential sources in MEG data analysis. Here, we extend this work to EEG recordings, which require a more sophisticated forward model due to the blurring of the electric current at tissue boundaries where the conductivity changes. Using CT scans, we create a realistic three-layer head model consisting of tessellated surfaces that represent the cerebrospinal fluid-skull, skull-scalp, and scalp-air boundaries. The cortical gray matter surface, the anatomical constraint for the source dipoles, is extracted from MRI scans. EEG beamforming is implemented on simulated sets of EEG data for three different head models: single spherical, multi-shell spherical, and multi-shell realistic. Using the same conditions for simulated EEG and MEG data, it is shown (and quantified by receiver operating characteristic analysis) that EEG beamforming detects radially oriented sources, to which MEG lacks sensitivity. By merging several techniques, such as linearly constrained minimum variance beamforming, realistic geometry forward solutions, and cortical constraints, we demonstrate it is possible to localize and estimate the dynamics of dipolar and spatially extended (distributed) sources of neural activity.
Osteotomy for Sigmoid Notch Obliquity and Ulnar Positive Variance
Dickson, Lisa M.; Tham, Stephen K. Y.
2014-01-01
Background Several causes of ulnar wrist pain have been described. One uncommon cause is ulnar carpal abutment associated with a notable distally facing sigmoid notch (reverse obliquity). Such an abnormality cannot be treated with ulnar shortening alone because it will result in incongruity of the distal radioulnar joint (DRUJ). Case Description A 23-year-old woman presented with ulnar wrist pain aggravated by forearm rotation. Ten years earlier she had sustained a distal radius fracture that was conservatively treated. Examination revealed mild tenderness at the DRUJ and decreased wrist flexion and grip strength on the affected side. Radiographic examination demonstrated 1 cm ulnar positive variance, ulnar styloid nonunion, and a 37° reverse obliquity of the sigmoid notch. The patient was treated with ulnar shortening and rotation sigmoid notch osteotomy to realign the sigmoid notch with the ulnar head. Literature Review Sigmoid notch incongruity is one of several causes of wrist pain after distal radius fracture. Traditional salvage options for DRUJ arthritis may result in loss of grip strength, painful ulnar shaft instability, or reossification and are not acceptable options in the young patient. Sigmoid notch osteotomy or osteoplasty have been described to correct the shape of the sigmoid notch in the axial plane. Clinical Relevance We report a coronal plane osteotomy of the sigmoid notch to treat reverse obliquity of the sigmoid notch associated with ulnar carpal abutment. The rotation osteotomy described is particularly useful for patients in whom a salvage procedure is not warranted. PMID:24533247
Cost/variance optimization for human exposure assessment studies.
Whitmore, Roy W; Pellizzari, Edo D; Zelon, Harvey S; Michael, Larry C; Quackenboss, James J
2005-11-01
The National Human Exposure Assessment Survey (NHEXAS) field study in EPA Region V (one of three NHEXAS field studies) provides extensive exposure data on a representative sample of 249 residents of the Great Lakes states. Concentration data were obtained for both metals and volatile organic compounds (VOCs) from multiple environmental media and from human biomarkers. A variance model for the logarithms of concentration measurements is used to define intraclass correlations between observations within primary sampling units (PSUs) (nominally counties) and within secondary sampling units (SSUs) (nominally Census blocks). A model for the total cost of the study is developed in terms of fixed costs and variable costs per PSU, SSU, and participant. Intraclass correlations are estimated for media and analytes with sufficient sample sizes. We demonstrate how the intraclass correlations and variable cost components can be used to determine the sample allocation that minimizes cost while achieving pre-specified precision constraints for future studies that monitor environmental concentrations and human exposures for metals and VOCs.
Neutrality and the Response of Rare Species to Environmental Variance
Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena; Bulleri, Fabio
2008-01-01
Neutral models and differential responses of species to environmental heterogeneity offer complementary explanations of species abundance distribution and dynamics. Under what circumstances one model prevails over the other is still a matter of debate. We show that the decay of similarity over time in rocky seashore assemblages of algae and invertebrates sampled over a period of 16 years was consistent with the predictions of a stochastic model of ecological drift at time scales larger than 2 years, but not at time scales between 3 and 24 months when similarity was quantified with an index that reflected changes in abundance of rare species. A field experiment was performed to examine whether assemblages responded neutrally or non-neutrally to changes in temporal variance of disturbance. The experimental results did not reject neutrality, but identified a positive effect of intermediate levels of environmental heterogeneity on the abundance of rare species. This effect translated into a marked decrease in the characteristic time scale of species turnover, highlighting the role of rare species in driving assemblage dynamics in fluctuating environments. PMID:18648545
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
Lung vasculature imaging using speckle variance optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber optic endoscopic OCT probe, we acquire OCT images from in vivo human subjects. The side-looking, circumferentially-scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
Fan Noise Reduction: An Overview
NASA Technical Reports Server (NTRS)
Envia, Edmane
2001-01-01
Fan noise reduction technologies developed as part of the engine noise reduction element of the Advanced Subsonic Technology Program are reviewed. Developments in low-noise fan stage design, swept and leaned outlet guide vanes, active noise control, fan flow management, and scarfed inlet are discussed. In each case, a description of the method is presented and, where available, representative results and general conclusions are discussed. The review concludes with a summary of the accomplishments of the AST-sponsored fan noise reduction research and a few thoughts on future work.
Genetic Variance for Body Size in a Natural Population of Drosophila Buzzatii
Ruiz, A.; Santos, M.; Barbadilla, A.; Quezada-Diaz, J. E.; Hasson, E.; Fontdevila, A.
1991-01-01
Previous work has shown thorax length to be under directional selection in the Drosophila buzzatii population of Carboneras. In order to predict the genetic consequences of natural selection, genetic variation for this trait was investigated in two ways. First, narrow-sense heritability was estimated in the laboratory F(2) generation of a sample of wild flies by means of the offspring-parent regression. A relatively high value, 0.59, was obtained. Because the phenotypic variance of wild flies was 7-9 times that of the flies raised in the laboratory, "natural" heritability may be estimated as one-seventh to one-ninth of that value. Second, the contribution of the second and fourth chromosomes, which are polymorphic for paracentric inversions, to the genetic variance of thorax length was estimated in the field and in the laboratory. This was done with the assistance of a simple genetic model which shows that the variance among chromosome arrangements and the variance among karyotypes provide minimum estimates of the chromosome's contribution to the additive and genetic variances of the trait, respectively. In males raised under optimal conditions in the laboratory, the variance among second-chromosome karyotypes accounted for 11.43% of the total phenotypic variance, and most of this variance was additive; by contrast, the contribution of the fourth chromosome was nonsignificant. The variance among second-chromosome karyotypes accounted for 1.56-1.78% of the total phenotypic variance in wild males and was nonsignificant in wild females. The variance among fourth-chromosome karyotypes accounted for 0.14-3.48% of the total phenotypic variance in wild flies. At both chromosomes, the proportion of additive variance was higher in mating flies than in nonmating flies. PMID:1916242
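The offspring-parent regression estimate mentioned above rests on the identity h² = slope of offspring value on midparent value. A simulated sketch (all variances hypothetical) recovering the reported 0.59:

```python
import numpy as np

# Narrow-sense heritability from offspring-midparent regression:
# h^2 equals the slope of offspring value on midparent value.
rng = np.random.default_rng(4)
n = 500
h2_true = 0.59   # value reported for thorax length in the laboratory F2
midparent = rng.normal(size=n)
offspring = h2_true * midparent + np.sqrt(1 - h2_true**2) * rng.normal(size=n)
slope = np.polyfit(midparent, offspring, 1)[0]
print(f"estimated h^2 ~ {slope:.2f}")
```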
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
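A compact sketch of steps 1-3 of the LV method follows: block-mean up-scaling, mean 3 × 3 local standard deviation, and the rate of change between levels. The synthetic surface and the exact ROC-LV sign convention are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img):
    """Mean local SD within a 3x3 moving window (the LV of step 2)."""
    m = uniform_filter(img, size=3)
    m2 = uniform_filter(img * img, size=3)
    sd = np.sqrt(np.maximum(m2 - m * m, 0.0))
    return sd.mean()

def lv_scale_profile(dem, factors):
    """LV across scale levels produced by simple block-mean up-scaling (step 1),
    plus the percentage rate of change of LV between levels (step 3)."""
    lv = []
    for f in factors:
        h, w = (dem.shape[0] // f) * f, (dem.shape[1] // f) * f
        coarse = dem[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        lv.append(local_variance(coarse))
    lv = np.array(lv)
    roc = 100 * (lv[1:] - lv[:-1]) / lv[:-1]   # ROC-LV, one convention
    return lv, roc

# Synthetic rough surface standing in for a LiDAR DEM.
dem = np.random.default_rng(5).normal(size=(512, 512)).cumsum(0).cumsum(1)
print(lv_scale_profile(dem, factors=[1, 2, 4, 8, 16]))
```

Peaks in the ROC-LV profile would then be read, as in the abstract, as scale levels at which the pattern elements are (relatively) homogeneous.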
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamforming enhances the resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, which is required for more accurate estimation of the covariance matrix of correlated signals, and from its inversion, which is required for calculating the MV weight vector; these are as high as O(L^2) and O(L^3), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as for the MV beamformer, with complexity as low as O(η^2 L). Further calculation is saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-target phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of the 43 rows of the array covariance matrix, which reduces the complexity to about 14% of the original while the image resolution remains comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
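The row-reduction idea can be sketched as follows; note this is only a schematic reading of the approach (the paper's exact construction of the thin system may differ), with the array size, η, and diagonal loading chosen arbitrarily.

```python
import numpy as np

def mv_weights(R, a, diag_load=1e-3):
    """Standard MV: w = R^{-1} a / (a^H R^{-1} a), with diagonal loading."""
    L = R.shape[0]
    Rl = R + diag_load * np.trace(R) / L * np.eye(L)
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)

def subspace_mv_weights(R, a, eta, diag_load=1e-3):
    """Row-reduced variant: keep the first eta rows of R (a thin eta x L system)
    and solve it in the least-squares sense before applying the MV constraint."""
    L = R.shape[0]
    Rthin = R[:eta, :] + diag_load * np.trace(R) / L * np.eye(eta, L)
    w_tilde, *_ = np.linalg.lstsq(Rthin, a[:eta], rcond=None)
    return w_tilde / (a.conj() @ w_tilde)   # unit gain in the look direction

rng = np.random.default_rng(6)
L = 43                                      # aperture size used in the abstract
a = np.ones(L)                              # steering vector toward broadside
snap = rng.normal(size=(L, 200))
R = snap @ snap.T / 200                     # sample spatial covariance
print(np.allclose(a @ mv_weights(R, a), 1.0),
      np.allclose(a @ subspace_mv_weights(R, a, eta=16), 1.0))
```

The thin solve touches only an η×L block, which is where the O(η²L) saving claimed in the abstract comes from.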
Advanced program weight control system
NASA Technical Reports Server (NTRS)
Derwa, G. T.
1978-01-01
The design and implementation of the Advanced Program Weight Control System (APWCS) are reported. The APWCS system allows the coordination of vehicle weight reduction programs well in advance so as to meet mandated requirements of fuel economy imposed by government and to achieve corporate targets of vehicle weights. The system is being used by multiple engineering offices to track weight reduction from inception to eventual production. The projected annualized savings due to the APWCS system is over $2.5 million.