Science.gov

Sample records for advanced variance reduction

  1. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    SciTech Connect

    Chiba, G.; Tsuji, M.; Narabayashi, T.

    2015-01-15

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.

  2. Hybrid biasing approaches for global variance reduction.

    PubMed

    Wu, Zeyun; Abdel-Khalik, Hany S

    2013-02-01

    A new variant of Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR). The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses.

  3. Variance Reduction Using Nonreversible Langevin Samplers.

    PubMed

    Duncan, A B; Lelièvre, T; Pavliotis, G A

    A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
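
    To make the idea concrete, the following minimal sketch (illustrative parameters only, not the paper's setup) samples a 2-D standard Gaussian with the dynamics dX = -(I + gamma*J)X dt + sqrt(2) dW, where J is antisymmetric so the target measure is preserved, and compares the spread of time-averaged estimates with and without the nonreversible term:

        # Sketch: nonreversible Langevin sampling of a 2-D standard Gaussian.
        # The extra drift -gamma*J*x (J antisymmetric) leaves the target
        # invariant but typically lowers the asymptotic variance of time
        # averages. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric perturbation

        def time_average(gamma, T=200.0, dt=1e-2):
            """Euler-Maruyama time average of f(x) = x0**2 (exact value 1)."""
            drift = -(np.eye(2) + gamma * J)
            x, acc = np.zeros(2), 0.0
            n = int(T / dt)
            for _ in range(n):
                x = x + drift @ x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
                acc += x[0] ** 2
            return acc / n

        for gamma in (0.0, 2.0):
            est = np.array([time_average(gamma) for _ in range(20)])
            print(f"gamma={gamma}: mean {est.mean():.3f}, "
                  f"estimator variance {est.var():.4f}")

    The Euler-Maruyama discretization carries a small bias; the point of the sketch is the comparison of estimator variances between the two values of gamma.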

  4. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
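
    As a toy illustration of the idea (a 1-D caricature, not the corrector problems of the paper): in one dimension the effective coefficient of a random medium is the harmonic mean of the cell coefficients, and antithetic pairing of lognormal configurations already reduces the variance of the empirical average:

        # Toy 1-D illustration: the effective coefficient of a 1-D random
        # medium is the harmonic mean of the cell coefficients. Antithetic
        # sampling pairs each Gaussian draw G with its mirror -G and averages
        # the two apparent coefficients. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        sigma, ncells, nconf = 0.5, 64, 2000

        def apparent_coeff(G):
            a = np.exp(sigma * G)                  # lognormal cell coefficients
            return 1.0 / np.mean(1.0 / a, axis=1)  # harmonic mean per configuration

        G = rng.standard_normal((nconf, ncells))
        plain = apparent_coeff(G)                              # standard MC
        anti = 0.5 * (apparent_coeff(G) + apparent_coeff(-G))  # antithetic pairs

        print("plain MC   : mean %.4f  var %.3e" % (plain.mean(), plain.var()))
        print("antithetic : mean %.4f  var %.3e" % (anti.mean(), anti.var()))

    A fair comparison should count the two corrector solves per antithetic pair, i.e. the gain is genuine only when the paired variance is below half the plain variance.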

  5. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies; in fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each approach are discussed.

  6. Automated variance reduction for Monte Carlo shielding analyses with MCNP

    NASA Astrophysics Data System (ADS)

    Radulescu, Georgeta

    Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase space of interest and thereby lower the variance of the statistical estimate. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional discrete ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional discrete ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
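
    The weight-window bookkeeping that such generated parameters feed is simple to state; the sketch below (illustrative bounds, not MCNP's implementation) shows the split/roulette rule and why it preserves the expected weight:

        # Minimal weight-window helper: split particles arriving above the
        # window, play Russian roulette below it. Expected total output weight
        # equals the input weight, so the game is unbiased.
        import numpy as np

        rng = np.random.default_rng(2)

        def apply_weight_window(weight, w_low, w_high, w_survive=None):
            """Return the list of post-window weights for one incoming particle."""
            if w_survive is None:
                w_survive = 0.5 * (w_low + w_high)
            if weight > w_high:                   # split into n copies
                n = int(np.ceil(weight / w_high))
                return [weight / n] * n
            if weight < w_low:                    # Russian roulette
                if rng.random() < weight / w_survive:
                    return [w_survive]            # survivor carries boosted weight
                return []                         # killed
            return [weight]                       # inside the window: unchanged

        print(apply_weight_window(5.0, 0.5, 2.0))   # splitting
        print(apply_weight_window(0.05, 0.5, 2.0))  # roulette (survives or dies)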

  7. Variance reduction methods applied to deep-penetration problems

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
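
    As a minimal concrete example of the kind of technique the course covers (a 1-D rod caricature with made-up parameters, not a code from the course): survival biasing with Russian roulette keeps particles alive through an optically thick absorbing slab where analog transmission events are rare:

        # Survival biasing (implicit capture) with Russian roulette in a 1-D
        # rod: estimate transmission through a thick, scattering slab.
        import numpy as np

        rng = np.random.default_rng(3)
        L, sigma_t, c = 10.0, 1.0, 0.7   # slab length, total xs, scatter ratio

        def history(analog=True):
            x, mu, w = 0.0, 1.0, 1.0
            while True:
                x += mu * rng.exponential(1.0 / sigma_t)
                if x >= L:
                    return w                     # transmitted: score the weight
                if x < 0.0:
                    return 0.0                   # leaked backwards
                if analog:
                    if rng.random() > c:
                        return 0.0               # absorbed
                else:
                    w *= c                       # implicit capture
                    if w < 1e-3:                 # roulette low-weight particles
                        if rng.random() < 0.1:
                            w /= 0.1
                        else:
                            return 0.0
                mu = 1.0 if rng.random() < 0.5 else -1.0  # isotropic re-emission

        n = 50_000
        for analog in (True, False):
            t = np.array([history(analog) for _ in range(n)])
            print("analog" if analog else "biased",
                  ": transmission %.3e, rel. err %.2f%%"
                  % (t.mean(), 100 * t.std() / (t.mean() * np.sqrt(n))))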

  8. Methods for variance reduction in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
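
    The quasi-random ingredient is easy to demonstrate in isolation; the sketch below (an arbitrary smooth integrand, not the paper's photon-transport setting) compares the spread of pseudo-random and scrambled-Sobol estimates of a 2-D integral:

        # Quasi-random vs pseudo-random sampling of a smooth integrand on the
        # unit square; scrambled Sobol points typically converge faster.
        import numpy as np
        from scipy.stats import qmc

        f = lambda u: np.exp(-np.sum(u ** 2, axis=1))

        n, reps = 1024, 50          # power of two, as Sobol prefers
        mc, qn = [], []
        for r in range(reps):
            u_mc = np.random.default_rng(r).random((n, 2))
            mc.append(f(u_mc).mean())
            u_q = qmc.Sobol(d=2, scramble=True, seed=r).random(n)
            qn.append(f(u_q).mean())

        print("pseudo-random std   :", np.std(mc))
        print("scrambled Sobol std :", np.std(qn))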

  9. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines, with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab-geometry, purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
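
    A toy analogue of the stratification (one purely absorbing cell, illustrative numbers, not the paper's test cases): photons born at optical depth t in a cell of thickness tau escape with probability exp(-(tau - t)), so concentrating samples in the fringe near the boundary sharply reduces the variance of the escape estimate:

        # Stratified ("fringe") emission sampling in one absorbing cell.
        import numpy as np

        rng = np.random.default_rng(4)
        tau, fringe, n = 10.0, 2.0, 20_000

        def escape(t):
            return np.exp(-(tau - t))     # escape probability from depth t

        # Unstratified: emission uniform over (0, tau).
        plain = escape(rng.uniform(0.0, tau, n))

        # Stratified: 10% of particles in the interior, 90% in the fringe;
        # weight each stratum mean by its volume fraction.
        n_int, n_fr = n // 10, n - n // 10
        p_int, p_fr = (tau - fringe) / tau, fringe / tau
        interior = escape(rng.uniform(0.0, tau - fringe, n_int))
        fr = escape(rng.uniform(tau - fringe, tau, n_fr))
        strat_mean = p_int * interior.mean() + p_fr * fr.mean()
        strat_err = np.sqrt(p_int**2 * interior.var() / n_int
                            + p_fr**2 * fr.var() / n_fr)

        print("plain      : %.4e +- %.1e" % (plain.mean(), plain.std() / np.sqrt(n)))
        print("stratified : %.4e +- %.1e" % (strat_mean, strat_err))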

  10. Monte Carlo calculation of specific absorbed fractions: variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.

    2015-04-01

    The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code PENELOPE for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10⁵ s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
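
    Interaction forcing, one of the two techniques used here, is easy to isolate; a sketch (thin purely absorbing layer and an illustrative depth score, not the PENELOPE implementation): force every photon to interact by sampling the free path from the exponential truncated to the layer and carrying the analog interaction probability as a weight:

        # Interaction forcing in a thin layer of optical thickness tau.
        import numpy as np

        rng = np.random.default_rng(5)
        tau, n = 0.02, 100_000
        p_int = 1.0 - np.exp(-tau)          # analog interaction probability

        # Score: depth of interaction (stand-in for a depth-dependent tally).
        # Analog: most histories never interact and score zero.
        s = rng.exponential(1.0, n)
        analog = np.where(s < tau, s, 0.0)

        # Forced: invert the truncated-exponential CDF, weight by p_int.
        xi = rng.random(n)
        s_forced = -np.log1p(-xi * p_int)   # lies in (0, tau) by construction
        forced = p_int * s_forced           # unbiased for the same expectation

        for name, est in (("analog", analog), ("forced", forced)):
            print("%s: mean %.3e, rel. err %.2f%%"
                  % (name, est.mean(), 100 * est.std() / (est.mean() * np.sqrt(n))))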

  11. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  12. Delivery Time Variance Reduction in the Military Supply Chain

    DTIC Science & Technology

    2010-03-01

    …greatest amount of delivery time variance. A simulation is developed using ARENA that models cargo shipments into aerial ports in Afghanistan. Design of experiments and a simulation optimizer, OptQuest, are used to determine the most effective methods of reducing delivery time variance at individual aerial ports in Afghanistan as well as the system as a whole. The results indicate that adjustments in port hold times can decrease the overall delivery time variance in the system.

  13. Replicative Use of an External Model in Simulation Variance Reduction

    DTIC Science & Technology

    1996-03-01

    …measures used are confidence interval width reduction, realized coverage, and estimated mean square error. Results of this study indicate analytical control variates achieve comparable confidence interval width reduction with internal and external control variates. However, the analytical control…
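
    The control-variate mechanics behind such confidence-interval comparisons can be shown in a few lines (a textbook example, not the thesis's external-model setting): subtract a correlated quantity with known mean, scaled by the estimated optimal coefficient:

        # Classic control variate: estimate E[exp(U)], U ~ Uniform(0,1),
        # using U itself (known mean 1/2) as the control.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 10_000
        u = rng.random(n)
        y = np.exp(u)                    # target: E[y] = e - 1

        C = np.cov(y, u)
        beta = C[0, 1] / C[1, 1]         # estimated optimal coefficient
        y_cv = y - beta * (u - 0.5)      # adjusted samples, same mean

        for name, v in (("plain", y), ("control variate", y_cv)):
            half = 1.96 * v.std(ddof=1) / np.sqrt(n)   # 95% CI half-width
            print("%s: %.5f +- %.5f" % (name, v.mean(), half))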

  14. An Investigation of Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation

    DTIC Science & Technology

    1991-03-01

    An Investigation of Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation, by Richard L. Ressler, March 1991. Dissertation advisor: Peter A. W. Lewis. This dissertation develops new techniques for variance reduction in computer simulation. It demonstrates that…

  15. Optimization under uncertainty: Adaptive variance reduction, adaptive metamodeling, and investigation of robustness measures

    NASA Astrophysics Data System (ADS)

    Medina, Juan Camilo

    This dissertation offers computational and theoretical advances for optimization under uncertainty problems that utilize a probabilistic framework for addressing such uncertainties, and adopt a probabilistic performance as objective function. Emphasis is placed on applications that involve potentially complex numerical and probability models. A generalized approach is adopted, treating the system model as a "black-box" and relying on stochastic simulation for evaluating the probabilistic performance. This approach can impose, though, an elevated computational cost, and two of the advances offered in this dissertation aim at decreasing the computational burden associated with stochastic simulation when integrated with optimization applications. The first one develops an adaptive implementation of importance sampling (a popular variance reduction technique) by sharing information across the iterations of the numerical optimization algorithm. The system model evaluations from the current iteration are utilized to formulate importance sampling densities for subsequent iterations with only a small additional computational effort. The characteristics of these densities as well as the specific model parameters these densities span are explicitly optimized. The second advancement focuses on adaptive tuning of a kriging metamodel to replace the computationally intensive system model. A novel implementation is considered, establishing a metamodel with respect to both the uncertain model parameters as well as the design variables, offering significant computational savings. Additionally, the adaptive selection of certain characteristics of the metamodel, such as support points or order of basis functions, is considered by utilizing readily available information from the previous iteration of the optimization algorithm. The third advancement extends to a different application and considers the assessment of the appropriateness of different candidate robust designs. A novel
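
    The importance-sampling identity underlying the first advancement can be illustrated on a rare-event toy problem (a fixed, non-adaptive proposal; the dissertation's adaptive scheme refines the density across optimization iterations):

        # Importance sampling for a small failure probability P(X > 4),
        # X ~ N(0,1), using a proposal shifted into the failure region.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        # Plain MC: almost no samples land in the failure region.
        x = rng.standard_normal(n)
        plain = (x > 4.0).astype(float)

        # Proposal N(4,1); likelihood ratio phi(x)/phi(x-4) = exp(8 - 4x).
        z = rng.normal(4.0, 1.0, n)
        weighted = (z > 4.0) * np.exp(8.0 - 4.0 * z)

        for name, est in (("plain", plain), ("importance", weighted)):
            print("%s: %.3e +- %.1e"
                  % (name, est.mean(), est.std(ddof=1) / np.sqrt(n)))
        # Exact value: 1 - Phi(4) ~ 3.17e-5.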

  16. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    NASA Astrophysics Data System (ADS)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  17. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP the two methods reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific algebraic multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
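
    A small dense sketch of the mechanics (Hutchinson probing with eigenvector deflation on a symmetric test matrix; the paper works at far larger scale, with singular triplets and iterative solvers):

        # Hutchinson trace estimation of trace(A^-1) with deflation of the
        # smallest eigenpairs: trace(A^-1) = sum(1/lam_i, deflated modes)
        # + stochastic estimate on the deflated complement.
        import numpy as np

        rng = np.random.default_rng(8)
        m, k, nz = 500, 10, 200         # matrix size, deflated modes, probes

        # SPD test matrix with a few tiny eigenvalues, so 1/lambda for those
        # modes dominates the estimator variance.
        lam = np.concatenate([rng.uniform(1e-3, 1e-2, k),
                              rng.uniform(1.0, 2.0, m - k)])
        Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
        A = (Q * lam) @ Q.T
        V = Q[:, :k]                    # eigenvectors to deflate
        Ainv = np.linalg.inv(A)         # toy only; real codes use solvers

        plain, defl = [], []
        for _ in range(nz):
            z = rng.choice([-1.0, 1.0], m)      # Rademacher probe
            plain.append(z @ Ainv @ z)
            zd = z - V @ (V.T @ z)              # project out deflated space
            defl.append(z @ Ainv @ zd)
        exact_defl = np.sum(1.0 / lam[:k])      # deflated part is exact

        print("exact trace(A^-1): %.2f" % np.sum(1.0 / lam))
        print("Hutchinson       : %.2f +- %.2f"
              % (np.mean(plain), np.std(plain) / np.sqrt(nz)))
        print("deflated         : %.2f +- %.2f"
              % (exact_defl + np.mean(defl), np.std(defl) / np.sqrt(nz)))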

  18. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    SciTech Connect

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of 'real' particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a 'black box'. There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  19. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    SciTech Connect

    Clarke, Peter; Varghese, Philip; Goldstein, David

    2014-12-09

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.

  20. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    SciTech Connect

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C; Murphy, Brian D; Mueller, Don

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  1. Variance reduction technique in a beta radiation beam using an extrapolation chamber.

    PubMed

    Polo, Ivón Oramas; Souza Santos, William; de Lara Antonio, Patrícia; Caldas, Linda V E

    2017-10-01

    This paper aims to show how the variance reduction technique "geometry splitting/Russian roulette" improves the statistical error and reduces uncertainties in the determination of the absorbed dose rate in tissue using an extrapolation chamber for beta radiation. The results show that the use of this technique can increase the number of events in the chamber cavity, bringing the simulation results into closer agreement with the physical problem. There was good agreement among the experimental measurements, the manufacturer's certificate, and the simulated absorbed dose rate values and uncertainties. The coefficient of variation of the absorbed dose rate obtained with the "geometry splitting/Russian roulette" technique was 2.85%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. ADVANTG 3.0.1: AutomateD VAriaNce reducTion Generator

    SciTech Connect

    2015-08-17

    Version 00 ADVANTG is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 V1.60 (CCC-810, not included in this distribution) based on approximate 3-D multigroup discrete ordinates adjoint transport solutions generated by Denovo (included in this distribution). The variance reduction parameters generated by ADVANTG consist of space and energy-dependent weight-window bounds and biased source distributions, which are output in formats that can be directly used with unmodified versions of MCNP5. ADVANTG has been applied to neutron, photon, and coupled neutron-photon simulations of real-world radiation detection and shielding scenarios. ADVANTG is compatible with all MCNP5 geometry features and can be used to accelerate cell tallies (F4, F6, F8), surface tallies (F1 and F2), point-detector tallies (F5), and Cartesian mesh tallies (FMESH).

  3. Estimating thermodynamic expectations and free energies in expanded ensemble simulations: Systematic variance reduction through conditioning

    NASA Astrophysics Data System (ADS)

    Athènes, Manuel; Terrier, Pierre

    2017-05-01

    Markov chain Monte Carlo methods are primarily used for sampling from a given probability distribution and estimating multi-dimensional integrals based on the information contained in the generated samples. Whenever it is possible, more accurate estimates are obtained by combining Monte Carlo integration and integration by numerical quadrature along particular coordinates. We show that this variance reduction technique, referred to as conditioning in probability theory, can be advantageously implemented in expanded ensemble simulations. These simulations aim at estimating thermodynamic expectations as a function of an external parameter that is sampled like an additional coordinate. Conditioning therein entails integrating along the external coordinate by numerical quadrature. We prove variance reduction with respect to alternative standard estimators and demonstrate the practical efficiency of the technique by estimating free energies and characterizing a structural phase transition between two solid phases.
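
    Conditioning in its most elementary form (not the expanded-ensemble estimator of the paper) already shows the variance mechanism: replace an indicator by its conditional expectation, integrated analytically:

        # Rao-Blackwellization / conditioning: estimate P(X + Y > 3) with
        # X, Y independent N(0,1). Conditioning on X replaces the indicator
        # by E[1{X+Y>3} | X] = 1 - Phi(3 - X), which never increases variance.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(9)
        n = 100_000
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)

        crude = (x + y > 3.0).astype(float)     # plain indicator
        conditioned = norm.sf(3.0 - x)          # analytic inner integral

        for name, est in (("crude", crude), ("conditioned", conditioned)):
            print("%s: %.5f +- %.5f"
                  % (name, est.mean(), est.std(ddof=1) / np.sqrt(n)))
        # Exact: P(N(0,2) > 3) = 1 - Phi(3/sqrt(2)) ~ 0.01695.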

  4. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials impacts obviously on the decommissioning planning and the low-level radioactive waste management. Continuous energy Monte-Carlo radiation transport code TRIPOLI-4 has been applied on radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the knowledge of 3D neutron flux map and energy spectra must be first investigated. To perform this type of neutron deep penetration calculations with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  5. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Vidal-Codina, F.; Nguyen, N. C.; Giles, M. B.; Peraire, J.

    2015-09-01

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
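
    A two-level sketch of the multilevel ingredient (geometric Brownian motion with an Euler discretization, not the HDG/reduced-basis hierarchy of the paper): coupling fine and coarse solves through the same noise makes the correction term cheap to average:

        # Two-level Monte Carlo for E[X_T] of geometric Brownian motion:
        # many cheap coarse paths plus a few coupled fine-minus-coarse pairs
        # sharing the same Brownian increments.
        import numpy as np

        rng = np.random.default_rng(10)
        T, mu, sig, x0 = 1.0, 0.05, 0.2, 1.0

        def euler_paths(n, steps, dW=None):
            dt = T / steps
            if dW is None:
                dW = rng.standard_normal((n, steps)) * np.sqrt(dt)
            x = np.full(n, x0)
            for k in range(steps):
                x = x + mu * x * dt + sig * x * dW[:, k]
            return x, dW

        coarse, _ = euler_paths(100_000, 8)              # level 0: cheap
        fine, dW = euler_paths(2_000, 64)                # level 1: coupled
        dW_c = dW.reshape(2_000, 8, 8).sum(axis=2)       # aggregate increments
        coarse_c, _ = euler_paths(2_000, 8, dW=dW_c)
        correction = fine - coarse_c                     # small variance

        est = coarse.mean() + correction.mean()
        print("two-level estimate : %.4f (exact %.4f)" % (est, x0 * np.exp(mu * T)))
        print("correction variance: %.2e vs coarse variance %.2e"
              % (correction.var(), coarse.var()))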

  6. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    SciTech Connect

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.

  7. Importance sampling variance reduction for the Fokker-Planck rarefied gas particle method

    NASA Astrophysics Data System (ADS)

    Collyer, B. S.; Connaughton, C.; Lockerby, D. A.

    2016-11-01

    The Fokker-Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.

  8. Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method

    SciTech Connect

    Collyer, B.S.; Connaughton, C.; Lockerby, D.A.

    2016-11-15

    The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.

  9. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    SciTech Connect

    Gorji, M. Hossein Andric, Nemanja; Jenny, Patrick

    2015-08-15

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
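
    The correlated-process idea can be reduced to a few lines (an illustrative scalar SDE, not the Fokker-Planck gas model): drive the main process and an auxiliary process with a known mean by the same Brownian increments, then use the auxiliary one as a control variate:

        # Correlated auxiliary process as a control variate: the main SDE has
        # no closed-form mean; the auxiliary OU process, driven by the same
        # noise, does. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(13)
        T, steps, sig, eps, x0, n = 1.0, 200, 0.5, 0.3, 1.0, 5_000
        dt = T / steps

        x = np.full(n, x0)               # main process (nonlinear drift)
        y = np.full(n, x0)               # auxiliary OU process (known mean)
        for _ in range(steps):
            dW = rng.standard_normal(n) * np.sqrt(dt)
            x = x + (-x - eps * x**3) * dt + sig * dW   # same noise for both
            y = y - y * dt + sig * dW

        ey = x0 * np.exp(-T)             # exact E[Y_T] for the OU process
        beta = np.cov(x, y)[0, 1] / np.var(y)
        cv = x - beta * (y - ey)         # correlated-process estimator

        print("plain      : %.4f +- %.4f" % (x.mean(), x.std() / np.sqrt(n)))
        print("correlated : %.4f +- %.4f" % (cv.mean(), cv.std() / np.sqrt(n)))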

  10. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2007-09-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model of this therapy tool.

  11. MCNPX-PoliMi Variance Reduction Techniques for Simulating Neutron Scintillation Detector Response

    NASA Astrophysics Data System (ADS)

    Prasad, Shikha

    Scintillation detectors have emerged as a viable He-3 replacement technology in the field of nuclear nonproliferation and safeguards. The scintillation light produced in the detectors is dependent on the energy deposited and the nucleus with which the interaction occurs. For neutrons interacting with hydrogen in organic liquid scintillation detectors, the energy-to-light conversion process is nonlinear. MCNPX-PoliMi is a Monte Carlo code that has been used for simulating this detailed scintillation physics; however, until now, simulations have only been done in analog mode. Analog Monte Carlo simulations can take long times to run, especially in the presence of shielding and large source-detector distances, as in the case of typical nonproliferation problems. In this thesis, two nonanalog approaches to speed up MCNPX-PoliMi simulations of neutron scintillation detector response have been studied. In the first approach, a response matrix method (RMM) is used to efficiently calculate neutron pulse height distributions (PHDs). This method combines the neutron current incident on the detector face with an MCNPX-PoliMi-calculated response matrix to generate PHDs. The PHD calculations and their associated uncertainty are compared for a polyethylene-shielded and lead-shielded Cf-252 source for three different techniques: fully analog MCNPX-PoliMi, the RMM, and the RMM with source biasing. The RMM with source biasing reduces computation time, or increases the figure-of-merit, on average by a factor of 600 for polyethylene and 300 for lead shielding (when compared to the fully analog calculation). The simulated neutron PHDs show good agreement with the laboratory measurements, thereby validating the RMM. In the second approach, MCNPX-PoliMi simulations are performed with the aid of variance reduction techniques. This is done by separating the analog and nonanalog components of the simulations. Inside the detector region, where scintillation light is produced, no variance…

  12. Application of fuzzy sets to estimate cost savings due to variance reduction

    NASA Astrophysics Data System (ADS)

    Munoz, Jairo; Ostwald, Phillip F.

    1993-12-01

    One common assumption of models to evaluate the cost of variation is that the quality characteristic can be approximated by a standard normal distribution. Such an assumption is invalid for three important cases: (a) when the random variable is always positive, (b) when manual intervention distorts random variation, and (c) when the variable of interest is evaluated by linguistic terms. This paper applies the Weibull distribution to address nonnormal situations and fuzzy logic theory to study the case of quality evaluated via lexical terms. The approach concentrates on the cost incurred by inspection to formulate a probabilistic-possibilistic model that determines cost savings due to variance reduction. The model is tested with actual data from a manual TIG welding process.

  13. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Sempau, J.; Brualla, L.

    2012-05-01

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.

  14. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs.

    PubMed

    Rodriguez, M; Sempau, J; Brualla, L

    2012-05-21

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.

  15. Efficient Monte Carlo simulation of multileaf collimators using geometry-related variance-reduction techniques.

    PubMed

    Brualla, L; Salvat, F; Palanco-Zamora, R

    2009-07-07

    A technique for accelerating the simulation of multileaf collimators with Monte Carlo methods is presented. This technique, which will be referred to as the movable-skin method, is based on geometrical modifications that do not alter the physical shape of the leaves, but that affect the logical way in which the Monte Carlo code processes the geometry. Zones of the geometry from which secondary radiation can emerge are defined as skins and the radiation transport throughout these zones is simulated accurately, while transport in non-skin zones is modelled approximately. The skins method is general and can be applied to most of the radiation transport Monte Carlo codes used in radiotherapy. The code AUTOLINAC for the automatic generation of the geometry file and the physical parameters required in a simulation of a linac with the Monte Carlo code PENELOPE is also introduced. This code has a modularized library of all Varian Clinac machines with their multileaf collimators and electron applicators. AUTOLINAC automatically determines the position of skins and the parameter values employed for other variance-reduction techniques that are adequate for the simulation of a linac. Using the adaptive variance-reduction techniques presented here, it is possible to simulate with PENELOPE an entire linac with a fully closed multileaf collimator in two hours. For this benchmark a single core of a 2.8 GHz processor was used and 2% statistical uncertainty (1σ) of the absorbed dose in water was reached with a voxel size of 2 × 2 × 2 mm³. Several configurations of the multileaf collimator were simulated and the results were found to be in excellent agreement with experimental measurements.

  16. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. Variance reduction techniques (VRT) were implemented in the MC calculation to shorten this time. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history count. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated applying electron range rejection, controlled by a global electron cut-off energy of 1, 2, or 5 MeV, using 20 × 10⁷ particle histories. Range rejection at 5 MeV generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. The 5 MeV electron range rejection was therefore used in the particle-history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. With the 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation, with a 1% deviation. Proper understanding and use of VRT can significantly reduce the MC electron beam calculation time while preserving its accuracy.

  17. Attention-Induced Variance and Noise Correlation Reduction in Macaque V1 Is Mediated by NMDA Receptors

    PubMed Central

    Herrero, Jose L.; Gieselmann, Marc A.; Sanayei, Mehdi; Thiele, Alexander

    2013-01-01

    Attention improves perception by affecting different aspects of the neuronal code. It enhances firing rates, it reduces firing rate variability and noise correlations of neurons, and it alters the strength of oscillatory activity. Attention-induced rate enhancement in striate cortex requires cholinergic mechanisms. The neuropharmacological mechanisms responsible for attention-induced variance and noise correlation reduction or those supporting changes in oscillatory activity are unknown. We show that ionotropic glutamatergic receptor activation is required for attention-induced rate variance, noise correlation, and LFP gamma power reduction in macaque V1, but not for attention-induced rate modulations. NMDA receptors mediate attention-induced variance reduction and attention-induced noise correlation reduction. Our results demonstrate that attention improves sensory processing by a variety of mechanisms that are dissociable at the receptor level. PMID:23719166

  18. Hybrid mesh generation using advancing reduction technique

    USDA-ARS?s Scientific Manuscript database

    This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...

  19. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems

    SciTech Connect

    Somasundaram, E.; Palmer, T. S.

    2013-07-01

    In this paper, we present the work done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library, and source definitions. Tortilla can also read importance functions (like the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)

  20. Variance reduction in randomised trials by inverse probability weighting using the propensity score

    PubMed Central

    Williamson, Elizabeth J; Forbes, Andrew; White, Ian R

    2014-01-01

    In individually randomised controlled trials, adjustment for baseline characteristics is often undertaken to increase precision of the treatment effect estimate. This is usually performed using covariate adjustment in outcome regression models. An alternative method of adjustment is to use inverse probability-of-treatment weighting (IPTW), on the basis of estimated propensity scores. We calculate the large-sample marginal variance of IPTW estimators of the mean difference for continuous outcomes, and risk difference, risk ratio or odds ratio for binary outcomes. We show that IPTW adjustment always increases the precision of the treatment effect estimate. For continuous outcomes, we demonstrate that the IPTW estimator has the same large-sample marginal variance as the standard analysis of covariance estimator. However, ignoring the estimation of the propensity score in the calculation of the variance leads to the erroneous conclusion that the IPTW treatment effect estimator has the same variance as an unadjusted estimator; thus, it is important to use a variance estimator that correctly takes into account the estimation of the propensity score. The IPTW approach has particular advantages when estimating risk differences or risk ratios. In this case, non-convergence of covariate-adjusted outcome regression models frequently occurs. Such problems can be circumvented by using the IPTW adjustment approach. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24114884
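
    A small simulation (illustrative data-generating model; scikit-learn for the propensity fit) reproduces the paper's headline point that weighting by an estimated propensity score tightens the treatment effect estimate in a randomized trial:

        # IPTW in a randomized trial with a continuous outcome: estimate the
        # propensity score by logistic regression on a baseline covariate and
        # weight each arm by 1/e(x) or 1/(1-e(x)). True effect is 1.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(11)

        def one_trial(n=500):
            x = rng.standard_normal(n)                      # baseline covariate
            z = rng.integers(0, 2, n)                       # randomized treatment
            y = 1.0 * z + 2.0 * x + rng.standard_normal(n)  # outcome
            e = (LogisticRegression().fit(x[:, None], z)
                 .predict_proba(x[:, None])[:, 1])          # estimated PS
            w = np.where(z == 1, 1.0 / e, 1.0 / (1.0 - e))
            iptw = (np.average(y[z == 1], weights=w[z == 1])
                    - np.average(y[z == 0], weights=w[z == 0]))
            unadj = y[z == 1].mean() - y[z == 0].mean()
            return unadj, iptw

        res = np.array([one_trial() for _ in range(500)])
        print("unadjusted: mean %.3f sd %.3f" % (res[:, 0].mean(), res[:, 0].std()))
        print("IPTW      : mean %.3f sd %.3f" % (res[:, 1].mean(), res[:, 1].std()))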

  1. Variance reduction through robust design of boundary conditions for stochastic hyperbolic systems of equations

    SciTech Connect

    Nordström, Jan; Wahlsten, Markus

    2015-02-01

    We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that, with the same knowledge of the data, one can obtain a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.

  2. Flagged uniform particle splitting for variance reduction in proton and carbon ion track-structure simulations

    NASA Astrophysics Data System (ADS)

    Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce

    2017-08-01

    …deviations) for endpoints (1) and (2), and within 2% (1 standard deviation) for endpoint (3). In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.

  3. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  4. Teacher Variance Inventory-IV: Psychometric Properties and Advanced Applications for Use in Consultation.

    ERIC Educational Resources Information Center

    Winchell, Kristina; Hyman, Irwin

    This paper describes the development of the fourth and latest version of the Teacher Variance Inventory-IV (TVI-IV). It was designed to improve the psychometric properties of the TVI and explore other characteristics that enable the TVI to be used for teacher consultation. The TVI-IV is a self-report measure based on Teacher Variance theory, a…

  5. Advanced CO2 Removal and Reduction System

    NASA Technical Reports Server (NTRS)

    Alptekin, Gokhan; Dubovik, Margarita; Copeland, Robert J.

    2011-01-01

    An advanced system for removing CO2 and H2O from cabin air, reducing the CO2, and returning the resulting O2 to the air is less massive than a prior system that includes two assemblies: one for removal and one for reduction. Also, in this system, unlike in the prior system, there is no need to compress and temporarily store CO2. In the present system, removal and reduction take place within a single assembly, wherein removal is effected by use of an alkali sorbent and reduction is effected using a supply of H2 and a Ru catalyst, by means of the Sabatier reaction, CO2 + 4H2 → CH4 + 2H2O. The assembly contains two fixed-bed reactors operating in alternation: at first, air is blown through the first bed, which absorbs CO2 and H2O. Once the first bed is saturated with CO2 and H2O, the flow of air is diverted through the second bed and the first bed is regenerated by supplying it with H2 for the Sabatier reaction. Initially, the H2 is heated to provide heat for the regeneration reaction, which is endothermic. In the later stages of regeneration, the Sabatier reaction, which is exothermic, supplies the heat for regeneration.

  6. Bias and variance reduction in estimating the proportion of true-null hypotheses

    PubMed Central

    Cheng, Yebin; Gao, Dexiang; Tong, Tiejun

    2015-01-01

    When testing a large number of hypotheses, estimating the proportion of true nulls, denoted by $\pi_0$, becomes increasingly important. This quantity has many applications in practice. For instance, a reliable estimate of $\pi_0$ can eliminate the conservative bias of the Benjamini–Hochberg procedure on controlling the false discovery rate. It is known that most methods in the literature for estimating $\pi_0$ are conservative. Recently, some attempts have been made to reduce such estimation bias. Nevertheless, they are either overly bias-corrected or suffer from an unacceptably large estimation variance. In this paper, we propose a new method for estimating $\pi_0$ that aims to reduce the bias and variance of the estimation simultaneously. To achieve this, we first utilize the probability density functions of false-null …
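
    The paper's own estimator is not reproduced in this record; for orientation, here is a minimal sketch of the classic Storey-type estimator of $\pi_0$, the kind of conservative method such work aims to improve upon. The tuning parameter and the toy p-value mixture are illustrative assumptions.

    ```python
    import numpy as np

    def storey_pi0(pvals, lam=0.5):
        # Under the null, p-values are Uniform(0, 1), so the fraction of
        # p-values above `lam` is roughly pi0 * (1 - lam).
        pvals = np.asarray(pvals)
        return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

    # Toy mixture: 80% true nulls (uniform p-values), 20% alternatives near zero
    rng = np.random.default_rng(0)
    p = np.concatenate([rng.random(8000), rng.beta(0.2, 5.0, 2000)])
    print(storey_pi0(p))   # close to (and typically conservatively above) 0.8
    ```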

  7. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    NASA Astrophysics Data System (ADS)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
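
    Shadow-function estimators build a control variate from an (approximate) solution of the Poisson equation for the chain. A minimal sketch on an AR(1) surrogate follows; it is not the chemical-kinetics setting of the paper, and the deliberately imperfect coefficient beta stands in for the approximate Poisson solutions used in practice.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # AR(1) surrogate chain: X_{t+1} = a X_t + N(0, 1); stationary E[X^2] = 1/(1 - a^2)
    a, n = 0.9, 100_000
    x = np.empty(n)
    x[0] = 0.0
    for t in range(n - 1):
        x[t + 1] = a * x[t] + rng.standard_normal()

    h = x**2                      # quantity whose stationary mean we want
    naive = h.mean()

    # Shadow-function idea: if u solves the Poisson equation u - Pu = h - pi(h),
    # then h - u + Pu is constant and has zero variance. For u(x) = beta * x^2,
    # Pu(x) = E[u(a x + e)] = beta * (a^2 x^2 + 1). An imperfect beta mimics an
    # approximate solution; the estimator stays unbiased either way.
    beta = 0.8 / (1.0 - a**2)
    shadow_terms = h - beta * x**2 + beta * (a**2 * x**2 + 1.0)

    print(naive, shadow_terms.mean(), 1.0 / (1.0 - a**2))   # both near 5.26
    print(h.std(ddof=1), shadow_terms.std(ddof=1))          # per-sample spread ~5x smaller
    ```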

  8. VR-BFDT: A variance reduction based binary fuzzy decision tree induction method for protein function prediction.

    PubMed

    Golzari, Fahimeh; Jalili, Saeed

    2015-07-21

    In the protein function prediction (PFP) problem, the goal is to predict the functions of numerous well-sequenced proteins whose functions are not yet precisely known. PFP is a special and complex problem in the machine learning domain, in which a protein (regarded as an instance) may have more than one function simultaneously. Furthermore, the functions (regarded as classes) are dependent and are organized in a hierarchical structure in the form of a tree or directed acyclic graph. One of the common learning methods proposed for solving this problem is the decision tree, which partitions the data into sets with sharp boundaries; as a result, small changes in the attribute values of a new instance may cause an incorrect change in its predicted label and, ultimately, misclassification. In this paper, a Variance Reduction based Binary Fuzzy Decision Tree (VR-BFDT) algorithm is proposed to predict the functions of proteins. The algorithm fuzzifies only the decision boundaries instead of converting the numeric attributes into fuzzy linguistic terms. It can assign multiple functions to each protein simultaneously and preserves the hierarchy consistency between functional classes. It uses label variance reduction as the splitting criterion to select the best "attribute-value" at each node of the decision tree. The experimental results show that the overall performance of the proposed algorithm is promising.
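
    As background to the splitting criterion named in the abstract, the following is a minimal sketch of label-variance-reduction scoring for crisp (non-fuzzy) splits; the fuzzified boundaries of VR-BFDT are not reproduced, and the function names are illustrative.

    ```python
    import numpy as np

    def variance_reduction(y, mask):
        # Drop in label variance achieved by splitting the node's labels y
        # into the subset selected by the boolean mask and its complement.
        left, right = y[mask], y[~mask]
        if len(left) == 0 or len(right) == 0:
            return 0.0
        n = len(y)
        return y.var() - (len(left) / n) * left.var() - (len(right) / n) * right.var()

    def best_split(X, y):
        # Exhaustively score candidate "attribute <= value" splits at a node.
        best_j, best_v, best_vr = None, None, -np.inf
        for j in range(X.shape[1]):
            for v in np.unique(X[:, j]):
                vr = variance_reduction(y, X[:, j] <= v)
                if vr > best_vr:
                    best_j, best_v, best_vr = j, v, vr
        return best_j, best_v, best_vr
    ```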

  9. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    PubMed Central

    Sampson, Andrew; Le, Yi; Williamson, Jeffrey F.

    2012-01-01

    … On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 × 1 × 1 mm3) and breast (0.67 × 0.67 × 0.8 mm3) CTVs, respectively. Conclusions: CMC supports an additional average 38–60 fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study. PMID:22320816
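
    Correlated sampling reduces the variance of a difference by reusing the same random histories for the reference and perturbed cases. A minimal toy sketch follows; it is not the authors' brachytherapy code, and the two integrands are hypothetical stand-ins for the reference and perturbed geometries.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    u = rng.random(n)

    f = np.exp(-u)                 # "reference geometry" integrand (toy stand-in)
    g = np.exp(-1.05 * u)          # slightly "perturbed geometry" integrand

    # Correlated: reuse the same histories u for both cases
    diff_corr = f - g
    # Uncorrelated: independent histories for the perturbed case
    diff_uncorr = f - np.exp(-1.05 * rng.random(n))

    for name, d in [("correlated", diff_corr), ("uncorrelated", diff_uncorr)]:
        print(name, d.mean(), d.std(ddof=1) / np.sqrt(n))  # same mean, far smaller error bar
    ```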

  10. Advancing Greenhouse Gas Reductions through Affordable Housing

    EPA Pesticide Factsheets

    James City County, Virginia, is an EPA Climate Showcase Community. EPA’s Climate Showcase Communities Program helps local governments and tribal nations pilot innovative, cost-effective and replicable community-based greenhouse gas reduction projects.

  11. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators of use in cancer therapy

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2010-01-01

    The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in reducing this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and increases the efficiency of the simulation by a factor of more than 50.
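
    The ant-colony controller itself is not described in enough detail here to reproduce. The sketch below shows only the underlying splitting/Russian-roulette weight-window mechanics that such a controller drives, with a fixed importance value standing in for the colony-derived importance map; thresholds and names are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def roulette_or_split(weight, importance, w_ref=1.0):
        # Weight-window style control: keep particle weights near
        # w_ref / importance by splitting heavy particles and playing
        # Russian roulette with light ones. Expected weight is preserved.
        target = w_ref / importance
        if weight > 2.0 * target:                # split: n copies share the weight
            n = int(np.ceil(weight / target))
            return [weight / n] * n
        if weight < 0.5 * target:                # roulette: survive with probability p
            p_survive = weight / target
            return [target] if rng.random() < p_survive else []
        return [weight]

    print(roulette_or_split(4.0, importance=2.0))   # heavy particle -> list of copies
    print(roulette_or_split(0.1, importance=2.0))   # light particle -> survives or dies
    ```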

  12. Accelerating Monte Carlo image reconstruction of a PMMA phantom through variance reduction techniques for quality control in digital mammography.

    PubMed

    Ramos, M; Ferrer, S; Verdu, G

    2005-01-01

    Mammography is a non-invasive technique used for the detection of breast lesions. The use of this technique in a breast screening program requires continuous quality control testing of mammography units to ensure a minimum absorbed glandular dose without compromising image quality. Digital mammography has been progressively introduced in screening centers following the recent evolution of photostimulable phosphor detectors. The aim of this work is the validation of a methodology for reconstructing digital images of a polymethyl-methacrylate (PMMA) phantom (P01 model) using pure Monte Carlo techniques. A reference image was acquired for this phantom under automatic exposure control (AEC) mode (28 kV and 14 mAs). Several variance reduction techniques (VRT) were applied to improve the efficiency of the simulations, defined as the number of particles reaching the imaging system per starting particle. All images were stored in DICOM format. The results show that the signal-to-noise ratio (SNR) of the reconstructed images increased with the use of the VRT, with similar values across the different tallies employed. In conclusion, these images could be used during quality control testing to reveal any deviation of the exposure parameters from the desired reference level.

  13. Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits

    NASA Technical Reports Server (NTRS)

    Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.

    2005-01-01

    This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 was applied to each of four classes of aircraft: a large four-engine aircraft, a large twin-engine aircraft, a small twin-engine aircraft, and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.

  14. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed.
    Catalogue identifier: AERO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
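
    As an illustration of the quadric-geometry idea mentioned above (not code from the described C++ package), a ray-quadric intersection test can be written compactly in homogeneous coordinates; the representation and names are illustrative.

    ```python
    import numpy as np

    def ray_quadric_hits(origin, direction, Q):
        # Distances t > 0 at which the ray origin + t*direction meets the
        # quadric x^T Q x = 0, with Q a symmetric 4x4 matrix in homogeneous
        # coordinates. Substituting the ray gives a*t^2 + 2*b*t + c = 0.
        o = np.append(origin, 1.0)
        d = np.append(direction, 0.0)
        a = d @ Q @ d
        b = o @ Q @ d
        c = o @ Q @ o
        if abs(a) < 1e-12:                       # degenerate case: linear in t
            return [-c / (2.0 * b)] if abs(b) > 1e-12 else []
        disc = b * b - a * c
        if disc < 0.0:                           # no real intersection
            return []
        r = np.sqrt(disc)
        return sorted(t for t in ((-b - r) / a, (-b + r) / a) if t > 0.0)

    # Unit sphere x^2 + y^2 + z^2 - 1 = 0, ray along +z from (0, 0, -3)
    Q = np.diag([1.0, 1.0, 1.0, -1.0])
    print(ray_quadric_hits(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), Q))
    # -> [2.0, 4.0]
    ```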

  15. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    NASA Astrophysics Data System (ADS)

    Ezzati, A. O.; Sohrabpour, M.

    2013-02-01

    In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods were implemented in the MCNPX2.4 source code. First, the efficiency of these methods was compared for two tallying methods: APRS is more efficient than APR for track-length estimator tallies, whereas for the energy deposition tally both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons, and the APRS relative efficiency contours were computed. These contours reveal that, as the photon energy increases, the contour depth and the surrounding areas increase. The relative efficiency contours indicate that the variance reduction factor is position and energy dependent. Contours for out-of-field voxels show that the latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in those voxels. The average variance reduction factors of APR and APRS differed by less than 0.6% for a splitting number of 1000.
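
    APRS exploits the rotational symmetry of a linac beam about its axis. The following is a minimal sketch of rotational splitting about the beam (z) axis, not the MCNPX2.4 implementation; all names and the choice of splitting number are illustrative.

    ```python
    import numpy as np

    def azimuthal_rotational_split(position, direction, weight, n_split):
        # Split one phase-space particle into n_split copies rotated uniformly
        # about the beam (z) axis; each copy carries weight / n_split, so the
        # total statistical weight is preserved by construction.
        copies = []
        for k in range(n_split):
            phi = 2.0 * np.pi * k / n_split
            c, s = np.cos(phi), np.sin(phi)
            rot = np.array([[c, -s, 0.0],
                            [s,  c, 0.0],
                            [0.0, 0.0, 1.0]])
            copies.append((rot @ position, rot @ direction, weight / n_split))
        return copies

    # One particle split 4 ways around the beam axis
    for p, d, w in azimuthal_rotational_split(np.array([1.0, 0.0, 10.0]),
                                              np.array([0.1, 0.0, 0.99]), 1.0, 4):
        print(p.round(3), d.round(3), w)
    ```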

  16. Coefficient of Variance as Quality Criterion for Evaluation of Advanced Hepatic Fibrosis Using 2D Shear-Wave Elastography.

    PubMed

    Lim, Sanghyeok; Kim, Seung Hyun; Kim, Yongsoo; Cho, Young Seo; Kim, Tae Yeob; Jeong, Woo Kyoung; Sohn, Joo Hyun

    2017-08-14

    To compare the diagnostic performance for advanced hepatic fibrosis measured by 2D shear-wave elastography (SWE), using either the coefficient of variance (CV) or the interquartile range divided by the median value (IQR/M) as the quality criterion. In this retrospective study, from January 2011 to December 2013, 96 patients who underwent both liver stiffness measurement by 2D SWE and liver biopsy for hepatic fibrosis grading were enrolled. The diagnostic performances of the CV and the IQR/M were analyzed using receiver operating characteristic curves with areas under the curves (AUCs) and were compared by Fisher's Z test, based on matching the cutoff points in an interactive dot diagram. P values less than 0.05 were considered significant. When the IQR/M cutoff value of 0.21 was used, the matched cutoff point for CV was 20%. With a CV cutoff of 20%, the diagnostic performance for advanced hepatic fibrosis (≥ F3 grade) in the group with CV less than 20% was better than that in the group with CV greater than or equal to 20% (AUC 0.967 versus 0.786, z statistic = 2.23, P = .025), whereas the matched IQR/M cutoff of 0.21 showed no difference (AUC 0.918 versus 0.927, z statistic = −0.178, P = .859). The validity of liver stiffness measurements made by 2D SWE for assessing advanced hepatic fibrosis may be judged using CVs, and when the CV is less than 20% the measurement can be considered "more reliable" than when using an IQR/M of less than 0.21. © 2017 by the American Institute of Ultrasound in Medicine.
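
    Both quality criteria are simple to compute from a set of stiffness readings. A small sketch follows, assuming the usual definitions (CV as sample standard deviation over the mean; IQR/M as interquartile range over the median); the readings are hypothetical.

    ```python
    import numpy as np

    def quality_metrics(stiffness):
        # CV: sample standard deviation divided by the mean.
        # IQR/M: interquartile range divided by the median.
        cv = stiffness.std(ddof=1) / stiffness.mean()
        q1, q3 = np.percentile(stiffness, [25, 75])
        return cv, (q3 - q1) / np.median(stiffness)

    readings = np.array([8.1, 8.7, 9.0, 9.4, 10.2, 11.0, 9.8, 9.1, 8.9, 9.6])  # kPa, hypothetical
    cv, iqr_m = quality_metrics(readings)
    print(f"CV = {cv:.1%}, IQR/M = {iqr_m:.2f}")   # compare against the 20% and 0.21 cutoffs
    ```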

  17. Beyond repeated-measures analysis of variance: advanced statistical methods for the analysis of longitudinal data in anesthesia research.

    PubMed

    Ma, Yan; Mazumdar, Madhu; Memtsoudis, Stavros G

    2012-01-01

    Research in the field of anesthesiology relies heavily on longitudinal designs for answering questions about the long-term efficacy and safety of various anesthetic and pain regimens. Yet anesthesiology research lags in the use of advanced statistical methods for analyzing longitudinal data. The goal of this article is to increase awareness of the advantages of modern statistical methods and to promote their use in anesthesia research. Here we introduce 2 modern and advanced statistical methods for analyzing longitudinal data: generalized estimating equations (GEE) and mixed-effects models (MEM). These methods are compared with the conventional repeated-measures analysis of variance (RM-ANOVA) through a clinical example with 2 types of end points (continuous and binary). In addition, we compared GEE and MEM with RM-ANOVA through a simulation study with varying sample sizes, varying numbers of repeated measures, and scenarios with and without missing data. In the clinical study, the 3 methods were found to be similar in terms of statistical estimation, although the parameter interpretations are somewhat different. The simulation study shows that GEE and MEM are more efficient in that they achieve higher power with smaller sample sizes or fewer repeated measurements, in both complete and missing data scenarios. Based on their advantages over RM-ANOVA, GEE and MEM should be strongly considered for the analysis of longitudinal data. In particular, GEE should be used to explore overall average effects, and MEM should be used when subject-specific effects (in addition to overall average effects) are of primary interest.
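
    Both model classes are available in standard statistical software. A minimal, self-contained sketch using Python's statsmodels on synthetic longitudinal data is given below; it illustrates the model classes discussed, not the study's analysis, and all variable names and effect sizes are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical long-format longitudinal data: 40 subjects x 5 visits
    rng = np.random.default_rng(4)
    n_sub, n_vis = 40, 5
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_sub), n_vis),
        "time": np.tile(np.arange(n_vis), n_sub),
    })
    df["treatment"] = (df["subject"] % 2).astype(int)
    subj_effect = rng.normal(0.0, 1.0, n_sub)[df["subject"]]
    df["outcome"] = (10.0 - 0.5 * df["time"] * df["treatment"]
                     + subj_effect + rng.normal(0.0, 1.0, len(df)))

    # GEE: population-average effects with an exchangeable working correlation
    gee = smf.gee("outcome ~ time * treatment", groups="subject", data=df,
                  cov_struct=sm.cov_struct.Exchangeable(),
                  family=sm.families.Gaussian()).fit()

    # MEM: a random intercept per subject captures subject-specific effects
    mem = smf.mixedlm("outcome ~ time * treatment", data=df,
                      groups=df["subject"]).fit()

    print(gee.params)
    print(mem.params)
    ```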

  18. Bias Reduction in Estimating Variance Components of Phytoplankton Existence at Na Thap River Based on Logistics Linear Mixed Models

    NASA Astrophysics Data System (ADS)

    Arisanti, R.; Notodiputro, K. A.; Sadik, K.; Lim, A.

    2017-03-01

    There are two approaches to estimating variance components, namely the linearity and integral approaches. However, the estimates of variance components produced by both methods are known to be biased. Firth (1993) introduced a parameter estimation method for correcting the bias of maximum likelihood estimates. This method lies within the class of linear models, in particular the Restricted Maximum Likelihood (REML) method, and the resulting estimator is known as the Firth estimator. In this paper we discuss the bias correction method applied to a logistic linear mixed model in analyzing the existence of Synedra phytoplankton along the Na Thap river in Thailand. The Firth-adjusted Maximum Likelihood Estimation (MLE) is similar to REML but exhibits the characteristics of a generalized linear mixed model. We evaluated the Firth adjustment method by means of simulations, and the results showed that the unadjusted MLE produced 95% confidence intervals that were narrower than those of the Firth method. However, the probability coverage of the intervals for the unadjusted MLE was lower than 95%, whereas for the Firth method the probability coverage was approximately 95%. These results were also consistent with the variance estimation of Synedra phytoplankton existence: the variance estimates of the Firth-adjusted MLE were lower than those of the unadjusted MLE.

  19. Oxidation-Reduction Resistance of Advanced Copper Alloys

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.

    2003-01-01

    Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.

  20. 20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO

  1. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
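
    The DVA algorithm itself is not reproduced in this record. For orientation only, here is a minimal sketch of the baseline single-factor covariance estimator whose systematic error such adjustments target; the equal-weight market factor and all names are illustrative assumptions.

    ```python
    import numpy as np

    def one_factor_cov(R):
        # Factor-model covariance for a T x p return matrix R, using the
        # equal-weight cross-sectional average as a single (market) factor:
        # Sigma = var(f) * beta beta^T + diag(residual variances).
        f = R.mean(axis=1)
        f_c = f - f.mean()
        Rc = R - R.mean(axis=0)
        beta = Rc.T @ f_c / (f_c @ f_c)          # per-asset factor loadings
        resid = Rc - np.outer(f_c, beta)
        var_f = f_c @ f_c / (len(f) - 1)
        return var_f * np.outer(beta, beta) + np.diag(resid.var(axis=0, ddof=1))

    rng = np.random.default_rng(6)
    T, p = 500, 20
    market = rng.normal(0.0, 0.01, T)
    R = 0.8 * market[:, None] + rng.normal(0.0, 0.02, (T, p))
    print(one_factor_cov(R).shape)   # (20, 20)
    ```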

  2. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  3. Advances in Photocatalytic CO2 Reduction with Water: A Review

    PubMed Central

    Nahar, Samsun; Zain, M. F. M.; Kadhum, Abdul Amir H.; Hasan, Hassimi Abu; Hasan, Md. Riad

    2017-01-01

    In recent years, the increasing level of CO2 in the atmosphere has not only contributed to global warming but has also triggered considerable interest in the photocatalytic reduction of CO2. The reduction of CO2 with H2O using sunlight is an innovative way to address the current growing environmental challenges. This paper reviews the basic principles of photocatalysis and photocatalytic CO2 reduction, discusses measures of photocatalytic efficiency and summarizes current advances in the exploration of this technology using different types of semiconductor photocatalysts, such as TiO2 and modified TiO2, layered-perovskite Ag/ALa4Ti4O15 (A = Ca, Ba, Sr), ferroelectric LiNbO3, and plasmonic photocatalysts. Novel, visible-light-harvesting plasmonic photocatalysts offer potential solutions for some of the main drawbacks of this reduction process. Effective plasmonic photocatalysts that have shown reduction activity towards CO2 with H2O are highlighted here. Although this technology is still at an embryonic stage, further systematic and comprehensive studies are suggested to develop photocatalysts with high production rates and selectivity. Based on the collected results, the immense prospects and opportunities that exist in this technique are also reviewed. PMID:28772988

  4. NASA Noise Reduction Program for Advanced Subsonic Transports

    NASA Technical Reports Server (NTRS)

    Stephens, David G.; Cazier, F. W., Jr.

    1995-01-01

    Aircraft noise is an important byproduct of the world's air transportation system. Because of growing public interest and sensitivity to noise, noise reduction technology is becoming increasingly important to the unconstrained growth and utilization of the air transportation system. Unless noise technology keeps pace with public demands, noise restrictions at the international, national and/or local levels may unduly constrain the growth and capacity of the system to serve the public. In recognition of the importance of noise technology to the future of air transportation as well as the viability and competitiveness of the aircraft that operate within the system, NASA, the FAA and the industry have developed noise reduction technology programs having application to virtually all classes of subsonic and supersonic aircraft envisioned to operate far into the 21st century. The purpose of this paper is to describe the scope and focus of the Advanced Subsonic Technology Noise Reduction program with emphasis on the advanced technologies that form the foundation of the program.

  5. Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M.

    2000-01-01

    This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.

  6. Recent advances in the kinetics of oxygen reduction

    SciTech Connect

    Adzic, R.

    1996-07-01

    Oxygen reduction is considered an important electrocatalytic reaction; the most notable need remains improvement of the catalytic activity of existing metal electrocatalysts and the development of new ones. A review is given of new advances in the understanding of the reaction kinetics and improvements of the electrocatalytic properties of some surfaces, with a focus on recent studies relating surface properties to activity and reaction kinetics. The urgent need is to improve the catalytic activity of Pt and to synthesize new, possibly non-noble-metal, catalysts. New experimental techniques for obtaining new levels of information include various in situ spectroscopies and scanning probes, some involving synchrotron radiation. 138 refs, 18 figs, 2 tabs.

  7. Three Averaging Techniques for Reduction of Antenna Temperature Variance Measured by a Dicke Mode, C-Band Radiometer

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Lawrence, Roland W.

    2000-01-01

    As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
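
    As a rough illustration of why averaging helps, the classic two-look Dicke resolution formula below shows how lengthening the antenna duty cycle and averaging the reference data both shrink Delta-T. The formula neglects gain fluctuations, and the system temperature, bandwidth, and integration times are hypothetical, not those of the benchtop instrument.

    ```python
    import numpy as np

    def dicke_delta_t(t_sys, bandwidth, tau_ant, tau_ref):
        # Radiometric resolution when the antenna and reference looks are
        # averaged over tau_ant and tau_ref seconds, respectively
        # (gain fluctuations neglected).
        return t_sys * np.sqrt(1.0 / (bandwidth * tau_ant) + 1.0 / (bandwidth * tau_ref))

    t_sys, b = 500.0, 100e6                            # K and Hz, hypothetical
    base = dicke_delta_t(t_sys, b, 0.5, 0.5)           # classic 50% duty cycle
    improved = dicke_delta_t(t_sys, b, 0.833, 5.0)     # longer duty cycle + averaged reference
    print(base, improved, 1.0 - improved / base)       # fractional Delta-T reduction
    ```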

  8. Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.

  9. Active Vibration Reduction of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Metscher, Jonathan F.; Schifer, Nicholas A.

    2016-01-01

    Stirling Radioisotope Power Systems (RPS) are being developed as an option to provide power on future space science missions where robotic spacecraft will orbit, flyby, land or rove. A Stirling Radioisotope Generator (SRG) could offer space missions a more efficient power system that uses one fourth of the nuclear fuel and decreases the thermal footprint compared to the current state of the art. The Stirling Cycle Technology Development (SCTD) Project is funded by the RPS Program to develop Stirling-based subsystems, including convertor and controller maturation efforts that have resulted in high-fidelity hardware like the Advanced Stirling Radioisotope Generator (ASRG), Advanced Stirling Convertor (ASC), and ASC Controller Unit (ACU). The SCTD Project also performs research to develop less mature technologies with a wide variety of objectives, including increasing temperature capability to enable new environments, improving system reliability or fault tolerance, reducing mass or size, and developing advanced concepts that are mission enabling. Active vibration reduction systems (AVRS), or "balancers", have historically been developed and characterized to provide fault tolerance for generator designs that incorporate dual-opposed Stirling convertors or to enable single-convertor, or small RPS, missions. Balancers reduce the dynamic disturbance forces created by the power piston and displacer internal moving components of a single operating convertor to meet spacecraft requirements for induced disturbance force. To improve fault tolerance for dual-opposed configurations and enable single-convertor configurations, a breadboard AVRS was implemented on the Advanced Stirling Convertor (ASC). The AVRS included a linear motor, a motor mount, and a closed-loop controller able to balance out the transmitted peak dynamic disturbance using acceleration feedback. Test objectives included quantifying power and mass penalty and reduction in transmitted force over a range of ASC …


  11. Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector

    SciTech Connect

    Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.

    2014-09-01

    Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.

  12. Biologic lung volume reduction therapy for advanced homogeneous emphysema.

    PubMed

    Refaely, Y; Dransfield, M; Kramer, M R; Gotfried, M; Leeds, W; McLennan, G; Tewari, S; Krasna, M; Criner, G J

    2010-07-01

    This report summarises phase 2 trial results of biologic lung volume reduction (BioLVR) for the treatment of advanced homogeneous emphysema. BioLVR therapy was administered bronchoscopically to 25 patients with homogeneous emphysema in an open-label study. Eight patients received low-dose (LD) treatment with 10 mL per site at eight subsegments; 17 received high-dose (HD) treatment with 20 mL per site at eight subsegments. Safety was assessed in terms of medical complications during 6-month follow-up. Efficacy was assessed in terms of change from baseline in gas trapping, spirometry, diffusing capacity, exercise capacity, dyspnoea and health-related quality of life. There were no deaths or serious medical complications during the study. A statistically significant reduction in gas trapping was observed at 3-month follow-up among HD patients, but not LD patients. At 6 months, changes from baseline in forced expiratory volume in 1 s (−8.0±13.93% versus +13.8±20.26%), forced vital capacity (−3.9±9.41% versus +9.0±13.01%), residual volume/total lung capacity ratio (−1.4±13.82% versus −5.4±12.14%), dyspnoea scores (−0.4±1.27 versus −0.8±0.73 units) and St George's Respiratory Questionnaire total domain scores (−4.9±8.30 versus −12.2±12.38 units) were better with HD than with LD therapy. BioLVR therapy with 20 mL per site at eight subsegmental sites may be a safe and effective therapy in patients with advanced homogeneous emphysema.

  13. Virus Reduction during Advanced Bardenpho and Conventional Wastewater Treatment Processes.

    PubMed

    Schmitz, Bradley W; Kitajima, Masaaki; Campillo, Maria E; Gerba, Charles P; Pepper, Ian L

    2016-09-06

    The present study investigated wastewater treatment for the removal of 11 different virus types (pepper mild mottle virus; Aichi virus; genogroup I, II, and IV noroviruses; enterovirus; sapovirus; group-A rotavirus; adenovirus; and JC and BK polyomaviruses) by two wastewater treatment facilities utilizing advanced Bardenpho technology and compared the results with conventional treatment processes. To our knowledge, this is the first study comparing full-scale treatment processes that all received sewage influent from the same region. The incidence of viruses in wastewater was assessed with respect to absolute abundance, occurrence, and reduction in monthly samples collected throughout a 12-month period in southern Arizona. Samples were concentrated via an electronegative filter method and quantified using TaqMan-based quantitative polymerase chain reaction (qPCR). The results suggest that Plant D, utilizing an advanced Bardenpho process as secondary treatment, reduced pathogenic viruses more effectively than facilities using conventional processes. However, the absence of cell-culture assays did not allow an accurate assessment of infective viruses. On the basis of these data, the Aichi virus is suggested as a conservative viral marker for adequate wastewater treatment, as it most often showed the best correlation coefficients to viral pathogens, was always detected at higher concentrations, and may overestimate the potential virus risk.

  14. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  15. Analytic investigation of advancing blade drag reduction by tip modifications

    NASA Technical Reports Server (NTRS)

    Tauber, M. E.

    1978-01-01

    Analytic techniques were applied to study the effect on the performance of the nonlifting advancing blade when the outboard 5% of the blade is modified to reduce drag. The tip modifications studied consisted of reducing airfoil thickness, sweepback, and planform taper. The reductions in instantaneous drag and torque were calculated for tip speed ratios from about 0.19 to 0.30, corresponding to advancing blade tip Mach numbers of 0.855 to 0.936, respectively. Approximations required in the analysis introduce uncertainties into the computed absolute values of drag and torque; however, the differences in the quantities should be a fairly reliable measure of the effect of changing tip geometry. For example, at the highest tip speed, instantaneous drag, and torque were reduced by 20% and 24%, respectively, for tip sweep of 40 deg on a blade using an NACA 0010 airfoil and by comparable amounts for 30-deg sweep on a blade having an NACA 0012 airfoil section. The present method should prove to be a useful, inexpensive technique for identifying promising configurations for additional study and testing.

  16. Advances in volcano monitoring and risk reduction in Latin America

    NASA Astrophysics Data System (ADS)

    McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Assistance Program, V. D.; Volcano Observatories, L. A.

    2014-12-01

    We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake, enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America, improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance both during regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data …

  17. Low cost biological lung volume reduction therapy for advanced emphysema

    PubMed Central

    Bakeer, Mostafa; Abdelgawad, Taha Taha; El-Metwaly, Raed; El-Morsi, Ahmed; El-Badrawy, Mohammad Khairy; El-Sharawy, Solafa

    2016-01-01

    Background Bronchoscopic lung volume reduction (BLVR) using biological agents is one of the new alternatives to lung volume reduction surgery. Objectives To evaluate the efficacy and safety of biological BLVR using low-cost agents, namely autologous blood and fibrin glue. Methods Enrolled patients were divided into two groups: group A (seven patients), in which autologous blood was used, and group B (eight patients), in which fibrin glue was used. The agents were injected through a triple-lumen balloon catheter via fiberoptic bronchoscope. Changes in high-resolution computerized tomography (HRCT) volumetry, pulmonary function tests, symptoms, and exercise capacity were evaluated at 12 weeks postprocedure, as were complications. Results In group A, at 12 weeks postprocedure, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: <0.001 and 0.038, respectively). In group B, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: 0.005 and 0.004, respectively). All patients tolerated the procedure, with no mortality. Conclusion BLVR using autologous blood and locally prepared fibrin glue is a promising therapy for advanced emphysema in terms of efficacy and safety as well as cost-effectiveness. PMID:27536091

  18. Advancing the research agenda for diagnostic error reduction

    PubMed Central

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-01-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process ‘pitfalls’ (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a ‘normal’ mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field. PMID:23942182

  19. Advanced Reduction Processes: A New Class of Treatment Processes

    PubMed Central

    Vellanki, Bhanu Prakash; Batchelor, Bill; Abdel-Wahab, Ahmed

    2013-01-01

    A new class of treatment processes called advanced reduction processes (ARPs) is proposed. ARPs combine activation methods and reducing agents to form highly reactive reducing radicals that degrade oxidized contaminants. Batch screening experiments were conducted to identify effective ARPs by applying several combinations of activation methods (ultraviolet light, ultrasound, electron beam, and microwaves) and reducing agents (dithionite, sulfite, ferrous iron, and sulfide) to the degradation of four target contaminants (perchlorate, nitrate, perfluorooctanoic acid, and 2,4-dichlorophenol) at three pH levels (2.4, 7.0, and 11.2). These experiments identified the combination of sulfite activated by ultraviolet light produced by a low-pressure mercury vapor lamp (UV-L) as an effective ARP. More detailed kinetic experiments were conducted with nitrate and perchlorate as target compounds, and nitrate was found to degrade more rapidly than perchlorate. The effectiveness of the UV-L/sulfite treatment process improved with increasing pH for both perchlorate and nitrate. We present the theory behind ARPs, identify potential ARPs, demonstrate their effectiveness against a wide range of contaminants, and provide basic experimental evidence in support of the fundamental hypothesis for ARPs, namely, that activation methods can be applied to reductants to form reducing radicals that degrade oxidized contaminants. This article provides an introduction to ARPs along with sufficient data to identify potentially effective ARPs and the target compounds these ARPs will be most effective in destroying. Further research will provide a detailed analysis of degradation kinetics and the mechanisms of contaminant destruction in an ARP. PMID:23840160

  20. Mindfulness-Based Stress Reduction in Advanced Nursing Practice

    PubMed Central

    Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula

    2015-01-01

    The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary. PMID:25673578

  1. Space Launch System NASA Research Announcement Advanced Booster Engineering Demonstration and/or Risk Reduction

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Craig, Kellie D.

    2011-01-01

    The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) engineering demonstration and/or risk reduction must relate to the offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining engineering demonstration and/or risk reduction.

  2. Development of new source diagnostic methods and variance reduction techniques for Monte Carlo eigenvalue problems with a focus on high dominance ratio problems

    NASA Astrophysics Data System (ADS)

    Wenner, Michael T.

    Obtaining the solution to the linear Boltzmann equation is often a daunting task. The time-independent form is an equation of six independent variables which cannot be solved analytically in all but some special problems. Instead, numerical approaches have been devised. This work focuses on improving Monte Carlo methods for its solution in eigenvalue form. First, a statistical method of stationarity detection called the KPSS test was adapted as a Monte Carlo eigenvalue source convergence test. The KPSS test analyzes the source center-of-mass series, which was chosen since it should be indicative of overall source behavior and is physically easy to understand. A source center-of-mass plot alone serves as a good visual source convergence diagnostic. The KPSS test and three different information-theoretic diagnostics were implemented into the well-known KENO V.a code inside the SCALE (version 5) code package from Oak Ridge National Laboratory and compared through analysis of a simple problem and several difficult source convergence benchmarks. Results showed that the KPSS test can add to overall confidence by identifying more problematic simulations than are found without it. Moreover, the source center-of-mass information visually aids in understanding the problem physics. The second major focus of this dissertation concerned variance reduction methodologies for Monte Carlo eigenvalue problems. The CADIS methodology, based on importance sampling, was adapted to eigenvalue problems. It was shown that the straightforward adaptation of importance sampling can provide a significant variance reduction in the determination of keff (in the cases studied, up to 30%). A modified version of this methodology was developed which utilizes independent deterministic importance simulations. In this new methodology, each particle is simulated multiple times, once for every other discretized source region, utilizing the importance for that region only. Since each particle
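
    The KPSS stationarity test referenced above is available in standard statistics libraries. A minimal sketch (not the KENO V.a implementation) of applying it to a synthetic source center-of-mass series using statsmodels follows; the series and its parameters are hypothetical.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import kpss

    rng = np.random.default_rng(5)

    # Hypothetical center-of-mass series over successive fission generations:
    # a drifting (unconverged) source vs. a stationary (converged) one.
    drifting = np.cumsum(rng.normal(0.0, 0.05, 200))
    stationary = rng.normal(0.0, 0.05, 200)

    for name, series in [("drifting", drifting), ("stationary", stationary)]:
        stat, pvalue, _, _ = kpss(series, regression="c", nlags="auto")
        print(f"{name}: KPSS stat = {stat:.3f}, p = {pvalue:.3f}")
    # A small p-value rejects stationarity, flagging an unconverged source.
    ```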

  3. 48 CFR 970.3200-1 - Reduction or suspension of advance, partial, or progress payments.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    48 CFR 970.3200-1 (Contract Financing): Reduction or suspension of advance, partial, or progress payments. (a) The procedures prescribed at 48 CFR 32.006 shall be followed regarding the reduction or suspension of payments...

  4. Advanced supersonic propulsion study. [with emphasis on noise level reduction

    NASA Technical Reports Server (NTRS)

    Sabatella, J. A. (Editor)

    1974-01-01

    A study was conducted to determine the most promising propulsion systems for advanced supersonic transport application and to identify the critical propulsion technology requirements. It is shown that noise constraints have a major effect on the selection of the various engine types and cycle parameters. Several promising advanced propulsion systems were identified which show the potential of achieving lower levels of sideline jet noise than the first-generation supersonic transport systems. The non-afterburning turbojet engine, utilizing a very high level of jet suppression, shows the potential to achieve the FAR 36 noise level. The duct-heating turbofan with a low level of jet suppression is the most attractive engine for noise levels from FAR 36 to FAR 36 minus 5 EPNdB, and some series/parallel variable cycle engines show the potential of achieving noise levels down to FAR 36 minus 10 EPNdB with a moderate additional penalty. The study also shows that an advanced supersonic commercial transport would benefit appreciably from advanced propulsion technology. The critical propulsion technology needed for a viable supersonic propulsion system and the required specific propulsion technology programs are outlined.

  5. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  6. Advances in reduction techniques for tire contact problems

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    1995-08-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  7. Advanced Fluid Research On Drag reduction In Turbulence Experiments -- AFRODITE

    NASA Astrophysics Data System (ADS)

    Fransson, Jens H. M.

    2011-11-01

    A hot topic in today's debate on global warming is drag reduction in aeronautics. The most beneficial concept for drag reduction is to maintain the major portion of the airfoil laminar. Estimates show that the potential drag reduction can be as much as 15%, which would give a significant reduction of NOx and CO emissions in the atmosphere, considering that the number of aircraft takeoffs in the EU alone is over 19 million per year. Previous wind tunnel measurements have shown that roughness elements can be used to appreciably delay transition to turbulence (Fransson et al. 2006, Phys. Rev. Lett. 96, 064501). The result is revolutionary, since the common belief had been that surface roughness causes earlier transition and in turn increases drag, and it is a proof of concept of the passive control method per se. The beauty of a passive control technique is that no external energy has to be added to the flow system in order to perform the control; instead, one uses the energy already present in the flow. Within the research programme AFRODITE, funded by the ERC, we will take this passive control method to the next level by making it both more persistent and more robust. Financial support from the European Research Council (ERC) is acknowledged.

  8. Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Sampson, Andrew Joseph

    This dissertation describes the application of two principled variance reduction strategies to increase the efficiency of two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied to two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference, ΔD, between the highly correlated doses computed in homogeneous and heterogeneous geometries. The particle transport in the heterogeneous geometry assumed a purely homogeneous environment, with altered particle weights accounting for the bias. Average gains of 37 and 60 relative to uncorrelated Monte Carlo (UMC) calculations are reported for the prostate and breast CTVs, respectively. To further increase the efficiency, up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC computes ΔD using CMC on a low-resolution (LR) spatial grid, followed by interpolation to a high-resolution (HR) voxel grid. The interpolated HR ΔD is then summed with an HR, pre-computed, homogeneous dose map. ICMC computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 sec and 0.39 sec, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution to the adjoint Boltzmann transport equation computed either via the discrete ordinates method (DOM) or via a MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested for a
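
    The dose-difference estimator above rests on correlated sampling: if the same random histories drive both geometries, most statistical noise cancels in ΔD. Below is a minimal common-random-numbers toy in Python; the two "dose" functions are hypothetical stand-ins, not the dissertation's brachytherapy transport.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000

        def dose_hom(u):   # score in the homogeneous geometry
            return np.exp(-u)

        def dose_het(u):   # slightly perturbed ("heterogeneous") geometry
            return np.exp(-1.05 * u)

        # Uncorrelated MC: independent random streams for the two geometries.
        delta_umc = dose_het(rng.random(N)) - dose_hom(rng.random(N))

        # Correlated MC: the same histories drive both geometries, so most of
        # the statistical noise cancels in the difference DeltaD.
        u = rng.random(N)
        delta_cmc = dose_het(u) - dose_hom(u)

        print("var(DeltaD), uncorrelated:", delta_umc.var())
        print("var(DeltaD), correlated:  ", delta_cmc.var())
        print("efficiency gain ~", delta_umc.var() / delta_cmc.var())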

  9. Recent advancements in mechanical reduction methods: particulate systems.

    PubMed

    Leleux, Jardin; Williams, Robert O

    2014-03-01

    The screening of new active pharmaceutical ingredients (APIs) has become more streamlined, and as a result the number of new drugs in the pipeline is steadily increasing. However, a major limiting factor in new API approval and market introduction is the low solubility associated with a large percentage of these new drugs. While many modification strategies have been studied to improve solubility, such as salt formation and the addition of cosolvents, most provide only marginal success and have severe disadvantages. One of the most successful methods to date is the mechanical reduction of drug particle size, which inherently increases the surface area of the particles and, as described by the Noyes-Whitney equation, the dissolution rate. Drug micronization has been the gold standard to achieve these improvements; however, the extremely low solubility of some new chemical entities is not significantly affected by size reduction in this range. A reduction in size to the nanometric scale is necessary. Bottom-up and top-down techniques are utilized to produce drug crystals in this size range; however, as discussed in this review, top-down approaches have provided greater enhancements in drug usability on the industrial scale. The six FDA-approved products, all of which exploit top-down approaches, confirm this. In this review, the advantages and disadvantages of both approaches are discussed, in addition to specific top-down techniques and the improvements they contribute to the pharmaceutical field.

  10. Noise exposure reduction of advanced high-lift systems

    NASA Technical Reports Server (NTRS)

    Haffner, Stephen W.

    1995-01-01

    The purpose of NASA Contract NAS1-20090 Task 3 was to investigate the potential for noise reduction that would result from improving the high-lift performance of conventional subsonic transports. The study showed that an increase in lift-to-drag ratio of 15 percent would reduce certification noise levels by about 2 EPNdB on approach, 1.5 EPNdB on cutback, and zero EPNdB on sideline. In most cases, noise contour areas would be reduced by 10 to 20 percent.

  11. Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells

    NASA Astrophysics Data System (ADS)

    Ali, Mohamed Mahmoud; Kvande, Halvor

    2017-02-01

    There are two main preheating methods that are used nowadays for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the performance of the preheating, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some works carried out in the mathematical modeling area are also discussed. It is concluded that there is a need for more studies with real situations for preheated cells on the basis of actual measurements. The expected development in electrical resistance preheating of aluminum reduction cells is also summarized.

  12. Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells

    NASA Astrophysics Data System (ADS)

    Ali, Mohamed Mahmoud; Kvande, Halvor

    2016-06-01

    There are two main preheating methods that are used nowadays for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the performance of the preheating, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some works carried out in the mathematical modeling area are also discussed. It is concluded that there is a need for more studies with real situations for preheated cells on the basis of actual measurements. The expected development in electrical resistance preheating of aluminum reduction cells is also summarized.

  13. An advanced carbon reactor subsystem for carbon dioxide reduction

    NASA Technical Reports Server (NTRS)

    Noyes, Gary P.; Cusick, Robert J.

    1986-01-01

    An evaluation is presented of the development status of an advanced carbon-reactor subsystem (ACRS) for the production of water and dense, solid carbon from CO2 and hydrogen, as required in physicochemical air revitalization systems for long-duration manned space missions. The ACRS consists of a Sabatier Methanation Reactor (SMR) that reduces CO2 with hydrogen to form methane and water, a gas-liquid separator to remove product water from the methane, and a Carbon Formation Reactor (CFR) to pyrolyze the methane to carbon and hydrogen; the hydrogen is recycled to the SMR, while the product carbon is periodically removed from the CFR. A preprototype ACRS under development for the NASA Space Station is described.

  14. Variance Reduction in a Dataset of Normal Macular Ganglion Cell Plus Inner Plexiform Layer Thickness Maps with Application to Glaucoma Diagnosis

    PubMed Central

    Knighton, Robert W.; Gregori, Giovanni; Budenz, Donald L.

    2012-01-01

    Purpose. To examine the similarities and differences in the shape of the macular ganglion cell plus inner plexiform layers (GCL+IPL) in a healthy human population, and seek methods to reduce population variance and improve discriminating power. Methods. Macular images of the right eyes of 23 healthy subjects were obtained with spectral domain optical coherence tomography. The thickness of GCL+IPL was determined by manual segmentation, areas with blood vessels were removed, and the resulting maps were fit by smooth surfaces in polar coordinates centered on the fovea. Results. The mean GCL+IPL thickness formed a horizontal elliptical annulus. The variance increased toward the center and was highest near the foveal edge. Individual maps differed in foveal size and overall GCL+IPL thickness. Foveal size correction by radially shifting individual maps to the same foveal size as the mean map reduced perifoveal variance. Thickness alignment by shifting individual maps axially, then radially, to match the mean map reduced overall variance. These transformations had very little effect on the population mean. Conclusions. Simple transformations of individual GCL+IPL thickness maps to a canonical form can considerably reduce the population variance in a sample of normal eyes, likely improving the ability to discriminate abnormal maps. The transformations considered here preserve the local geometry of the thickness maps. When used on a patient's map, they can produce a deviation map that provides a meaningful measurement of the size of local thickness deviations and allows estimation of the number of ganglion cells lost in a glaucomatous defect. PMID:22562512
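
    A hypothetical one-dimensional analogue of the transformations described above (a radial rescaling to a common foveal size followed by an axial thickness shift toward the mean map) might look like the following Python sketch; the names, the interpolation scheme, and the synthetic profiles are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def align_profile(profile, radii, mean_profile, foveal_size, mean_foveal_size):
            # Radial step: rescale the radial coordinate so this eye's foveal
            # size matches the mean map's, then resample onto the common grid.
            scaled_radii = radii * (mean_foveal_size / foveal_size)
            resampled = np.interp(radii, scaled_radii, profile)
            # Axial step: shift the thickness values to the mean map's level.
            return resampled + (mean_profile.mean() - resampled.mean())

        # Toy usage: a thinner profile whose fovea is 10% larger than the mean.
        radii = np.linspace(0.5, 3.0, 50)                    # mm from foveal center
        mean_profile = 80.0 * np.exp(-((radii - 1.2) ** 2))  # toy annular mean map
        subject = 70.0 * np.exp(-((radii / 1.1 - 1.2) ** 2))
        aligned = align_profile(subject, radii, mean_profile, 1.1, 1.0)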

  15. Development of an advanced Sabatier CO2 reduction subsystem

    NASA Technical Reports Server (NTRS)

    Kleiner, G. N.; Cusick, R. J.

    1981-01-01

    A preprototype Sabatier CO2 reduction subsystem was successfully designed, fabricated, and tested. The lightweight, quick-starting (less than 5 minutes) reactor utilizes a highly active and physically durable methanation catalyst composed of ruthenium on alumina. The use of this improved catalyst permits a simple, passively controlled reactor design with an average lean-component H2/CO2 conversion efficiency of over 99% over a range of H2/CO2 molar ratios of 1.8 to 5, while operating with process flows equivalent to a crew size of up to five persons. The subsystem requires no heater operation after start-up, even during simulated 55-minute lightside/39-minute darkside orbital operation.

  16. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andy; Greene, William D.

    2017-01-01

    The goals of NASA's Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort are to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. The SLS Block 1 vehicle is being designed to carry 70 mT to LEO using two five-segment solid rocket boosters (SRBs) similar to the boosters that helped power the space shuttle to orbit. The evolved 130 mT payload class rocket requires an advanced booster with more thrust than any existing U.S. liquid- or solid-fueled boosters

  17. Reduction in Clinical Variance Using Targeted Design Changes in Computerized Provider Order Entry (CPOE) Order Sets: Impact on Hospitalized Children with Acute Asthma Exacerbation.

    PubMed

    Jacobs, B R; Hart, K W; Rucker, D W

    2012-01-01

    Unwarranted variance in healthcare has been associated with prolonged length of stay, diminished health and increased cost. Practice variance in the management of asthma can be significant and few investigators have evaluated strategies to reduce this variance. We hypothesized that selective redesign of order sets using different ways to frame the order and physician decision-making in a computerized provider order entry system could increase adherence to evidence-based care and reduce population-specific variance. The study focused on the use of an evidence-based asthma exacerbation order set in the electronic health record (EHR) before and after order set redesign. In the Baseline period, the EHR was queried for frequency of use of an asthma exacerbation order set and its individual orders. Important individual orders with suboptimal use were targeted for redesign. Data from a Post-Intervention period were then analyzed. In the Baseline period there were 245 patient visits in which the acute asthma exacerbation order set was selected. The utilization frequency of most orders in the order set during this period exceeded 90%. Three care items were targeted for intervention due to suboptimal utilization: admission weight, activity center use and peak flow measurements. In the Post-Intervention period there were 213 patient visits. Order set redesign using different default order content resulted in significant improvement in the utilization of orders for all 3 items: admission weight (79.2% to 94.8% utilization, p<0.001), activity center (84.1% to 95.3% utilization, p<0.001) and peak flow (18.8% to 55.9% utilization, p<0.001). Utilization of peak flow orders for children ≥8 years of age increased from 42.7% to 94.1% (p<0.001). Details of order set design greatly influence clinician prescribing behavior. Queries of the EHR reveal variance associated with ordering frequencies. Targeting and changing order set design elements in a CPOE system results in improved

  18. Advanced Exploration Systems (AES) Logistics Reduction and Repurposing Project: Advanced Clothing Ground Study Final Report

    NASA Technical Reports Server (NTRS)

    Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini

    2013-01-01

    All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web

  19. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    PubMed Central

    Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 °C to 1400 °C, carbon in fly ash, coal fineness, and reburn zone stoichiometric ratio, were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C, and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the effectiveness of NH3 at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) alone would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common ammonia dose of conventional SNCR technology. A mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures. PMID:15682503

  20. Short-term changes after a weight reduction intervention in advanced diabetic nephropathy.

    PubMed

    Friedman, Allon N; Chambers, Mary; Kamendulis, Lisa M; Temmerman, Joan

    2013-11-01

    Obesity precedes and is strongly linked to the development of type 2 diabetic nephropathy in most patients, yet little is known about the effects of weight reduction on this disease. This study aimed to establish proof of concept for the hypothesis that weight reduction ameliorates diabetic nephropathy. Six obese individuals with advanced diabetic nephropathy (estimated GFR <40 ml/min per 1.73 m², urine albumin excretion >30 mg/d) currently taking a renin-aldosterone axis inhibitor underwent a 12-week very low calorie ketogenic weight reduction diet with encouragement of exercise between March and September 2012. Albuminuria and other parameters of kidney health were the main outcome measures. There was a 12% reduction in weight (median 118.5 versus 104.3 kg, P=0.03). The intervention was associated with a 36% reduction in albuminuria that did not reach statistical significance (2124 versus 1366 mg/24 h, P=0.08) and significant reductions in the filtration markers serum creatinine (3.54 versus 3.13 mg/dl, P<0.05) and cystatin C (2.79 versus 2.46 mg/l, P<0.05). Improvements were also noted for the diabetes markers fasting glucose (166 versus 131 mg/dl, P<0.05), fasting insulin (26.9 versus 10.4 μU/ml, P<0.05), and insulin resistance (9.6 versus 4.2, P=0.03). Physical function, general health, and the number of diabetes medications also showed statistically significant signs of improvement. After a short-term intensive weight reduction intervention in patients with advanced diabetic nephropathy, improvements were observed in markers of glomerular filtration, diabetes status, and risk factors for kidney disease progression, as well as other general indicators of health and well-being.

  1. Advancements in Steel for Weight Reduction of P900 Armor Plate

    DTIC Science & Technology

    2008-12-01

    ... were investigated as alternatives to MIL-PRF-32269 steel alloys for application in P900 perforated armor currently used for Army ground combat ... (R. A. Howell and J. S. Montgomery, Survivability Materials Branch, Army Research Laboratory, Aberdeen Proving Ground, MD 21001; D. C. Van Aken, Missouri University of Science and Technology, Rolla, MO 65401.) Ballistic tests

  2. Advanced Risk Reduction Tool (ARRT) Special Case Study Report: Science and Engineering Technical Assessments (SETA) Program

    NASA Technical Reports Server (NTRS)

    Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian

    2000-01-01

    This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings from exploring how the underlying models of the Advanced Risk Reduction Tool (ARRT) identify, estimate, and integrate Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.

  3. NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements.

  4. NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements. Demonstrations and

  5. Tremor reduction by subthalamic nucleus stimulation and medication in advanced Parkinson's disease.

    PubMed

    Blahak, Christian; Wöhrle, Johannes C; Capelle, Hans-Holger; Bäzner, Hansjörg; Grips, Eva; Weigel, Ralf; Hennerici, Michael G; Krauss, Joachim K

    2007-02-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) has proved to be effective for tremor in Parkinson's disease (PD). Most of the recent studies used only clinical data to analyse tremor reduction. The objective of our study was to quantify tremor reduction by STN DBS and antiparkinsonian medication in elderly PD patients using an objective measuring system. Amplitude and frequency of resting tremor and re-emergent resting tremor during postural tasks were analysed using an ultrasound-based measuring system and surface electromyography. In a prospective study design nine patients with advanced PD were examined preoperatively off and on medication, and twice postoperatively during four treatment conditions: off treatment, on STN DBS, on medication, and on STN DBS plus medication. While both STN DBS and medication reduced tremor amplitude, STN DBS alone and the combination of medication and STN DBS were significantly superior to pre- and postoperative medication. STN DBS but not medication increased tremor frequency, and off-treatment tremor frequency was significantly reduced postoperatively compared to baseline. These findings demonstrate that STN DBS is highly effective in elderly patients with advanced PD and moderate preoperative tremor reduction by medication. Thus, given its added impact on the other parkinsonian symptoms, STN DBS can replace thalamic stimulation in this cohort of patients. Nevertheless, medication was still effective postoperatively and may act synergistically. The significantly superior efficacy of STN DBS on tremor amplitude and its impact on tremor frequency in contrast to medication might be explained by the influence of STN DBS on additional neural circuits independent from dopaminergic neurotransmission.

  6. Regulatory Risk Reduction for Advanced Reactor Technologies – FY2016 Status and Work Plan Summary

    SciTech Connect

    Moe, Wayne Leland

    2016-08-01

    Millions of public and private sector dollars have been invested over recent decades to realize greater efficiency, reliability, and the inherent and passive safety offered by advanced nuclear reactor technologies. However, a major challenge in realizing those benefits resides in the existing U.S. regulatory framework. This framework governs all commercial nuclear plant construction, operations, and safety issues and is heavily centered on large light water reactor (LWR) technology. The framework must be modernized to effectively deal with non-LWR advanced designs if those designs are to become part of the U.S. energy supply. The U.S. Department of Energy's (DOE) Advanced Reactor Technologies (ART) Regulatory Risk Reduction (RRR) initiative, managed by the Regulatory Affairs Department at the Idaho National Laboratory (INL), is establishing a capability that can systematically retire extraneous licensing risks associated with regulatory framework incompatibilities. This capability proposes to rely heavily on the perspectives of the affected regulated community (i.e., commercial advanced reactor designers/vendors and prospective owner/operators) yet remain tuned to assuring public safety and acceptability by the regulators responsible for license issuance. The extent to which broad industry perspectives are being incorporated into the proposed framework makes this initiative unique and of potential benefit to all future domestic non-LWR applicants.

  7. Recent advances in the design of tailored nanomaterials for efficient oxygen reduction reaction

    DOE PAGES

    Lv, Haifeng; Li, Dongguo; Strmcnik, Dusan; ...

    2016-04-11

    In the past decade, polymer electrolyte membrane fuel cells (PEMFCs) have been evaluated for both automotive and stationary applications. One of the main obstacles to large-scale commercialization of this technology is the sluggish oxygen reduction reaction that takes place on the cathode side of the fuel cell. Consequently, ongoing research efforts are focused on the design of cathode materials that could improve the kinetics and durability. The majority of these efforts rely on novel synthetic approaches that provide control over the structure, size, shape, and composition of catalytically active materials. This article highlights the most recent advances that have been made to tailor critical parameters of nanoscale materials in order to achieve more efficient performance of the oxygen reduction reaction (ORR).

  8. Impacts of natural organic matter on perchlorate removal by an advanced reduction process.

    PubMed

    Duan, Yuhang; Batchelor, Bill

    2014-01-01

    Perchlorate can be destroyed by Advanced Reduction Processes (ARPs) that combine chemical reductants (e.g., sulfite) with activating methods (e.g., UV light) in order to produce highly reactive reducing free radicals that are capable of rapid and effective perchlorate reduction. However, natural organic matter (NOM) exists widely in the environment and has the potential to influence perchlorate reduction by ARPs that use UV light as the activating method. Batch experiments were conducted to obtain data on the impacts of NOM and the wavelength of light on the destruction of perchlorate by ARPs that use sulfite activated by UV light produced by low-pressure mercury lamps (UV-L) or by KrCl excimer lamps (UV-KrCl). The results indicate that NOM strongly inhibits perchlorate removal by both ARPs, because it competes with sulfite for UV light. Even though the absorbance of sulfite is much higher at 222 nm than at 254 nm, the results indicate that a smaller amount of perchlorate was removed with the UV-KrCl lamp (222 nm) than with the UV-L lamp (254 nm). The results of this study will help to develop the proper way to apply ARPs as practical water treatment processes.

  9. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
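
    The linear-regime recipe stated above (galaxy cosmic variance = galaxy bias × dark matter cosmic variance) reduces to a one-line composition once the two ingredients are available. In the Python sketch below, both helpers are crude illustrative stand-ins, not the paper's fitting functions or tabulated values:

        def sigma_dm(volume_mpc3):
            # Toy stand-in: rms dark matter fluctuation shrinking with volume.
            return 0.4 * (volume_mpc3 / 1.0e5) ** -0.5

        def galaxy_bias(log_mstar, zbar):
            # Toy stand-in: bias grows with stellar mass and redshift.
            return 1.0 + 0.3 * (log_mstar - 10.0) + 0.5 * zbar

        def sigma_v_galaxy(volume_mpc3, log_mstar, zbar):
            # Linear regime: galaxy cosmic variance = bias * dark matter variance.
            return galaxy_bias(log_mstar, zbar) * sigma_dm(volume_mpc3)

        # e.g., a massive sample at zbar = 2 in a toy pencil-beam volume.
        print(sigma_v_galaxy(1.0e5, 11.0, 2.0))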

  10. A COSMIC VARIANCE COOKBOOK

    SciTech Connect

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-04-20

    Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z

  11. Tumor Volume Reduction Rate After Preoperative Chemoradiotherapy as a Prognostic Factor in Locally Advanced Rectal Cancer

    SciTech Connect

    Yeo, Seung-Gu; Kim, Dae Yong; Park, Ji Won; Oh, Jae Hwan; Kim, Sun Young; Chang, Hee Jin; Kim, Tae Hyun; Kim, Byung Chang; Sohn, Dae Kyung; Kim, Min Ju

    2012-02-01

    Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3-4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume − post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27-99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0-100%). Downstaging (ypT0-2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
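
    The TVRR equation quoted above translates directly into code; the small Python sketch below also applies the 45% cutoff used in the multivariate analysis (variable names are illustrative):

        def tvrr(pre_volume, post_volume):
            """Tumor volume reduction rate (%), per the equation above."""
            return (pre_volume - post_volume) * 100.0 / pre_volume

        def tvrr_group(pre_volume, post_volume, cutoff=45.0):
            """Dichotomize TVRR at the cutoff used in the multivariate analysis."""
            return "high" if tvrr(pre_volume, post_volume) >= cutoff else "low"

        print(tvrr(50.0, 14.9))        # 70.2, matching the reported median TVRR
        print(tvrr_group(50.0, 14.9))  # "high"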

  12. Risk reduction activities for an F-1-based advanced booster for NASA's Space Launch System

    NASA Astrophysics Data System (ADS)

    Crocker, A. M.; Doering, K. B.; Cook, S. A.; Meadows, R. G.; Lariviere, B. W.; Bachtel, F. D.

    For NASA's Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) procurement, Dynetics, Inc. and Pratt & Whitney Rocketdyne (PWR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's goal of enabling competition on an affordable booster that meets the evolved capabilities of the SLS. During the ABEDRR effort, the Dynetics Team will apply state-of-the-art manufacturing and processing techniques to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. ABEDRR will use NASA test facilities to perform full-scale F-1 gas generator and powerpack hot-fire test campaigns for engine risk reduction. Dynetics will also fabricate and test a tank assembly to verify the structural design. The Dynetics Team is partnered with NASA through Space Act Agreements (SAAs) to maximize the expertise and capabilities applied to ABEDRR.

  13. Removal of PCBs in contaminated soils by means of chemical reduction and advanced oxidation processes.

    PubMed

    Rybnikova, V; Usman, M; Hanna, K

    2016-09-01

    Although chemical reduction and advanced oxidation processes have been widely used individually, very few studies have assessed the combined reduction/oxidation approach for soil remediation. In the present study, experiments were performed in spiked sand and historically contaminated soil using four synthetic nanoparticles (Fe(0), Fe/Ni, Fe3O4, Fe3-xNixO4). These nanoparticles were tested first for reductive transformation of polychlorinated biphenyls (PCBs) and then employed as catalysts to promote chemical oxidation reactions (H2O2 or persulfate). The results indicated that bimetallic Fe/Ni nanoparticles showed the highest efficiency in the reduction of PCB28 and PCB118 in spiked sand (97% and 79%, respectively), whereas magnetite (Fe3O4) exhibited high catalytic stability during the combined reduction/oxidation approach. In chemical oxidation, persulfate showed a greater extent of PCB degradation than hydrogen peroxide. As expected, the degradation efficiency was limited in historically contaminated soil, where only Fe(0) and Fe/Ni particles exhibited reductive capability towards PCBs (13% and 18%). In the oxidation step, the highest degradation extents were obtained in the presence of Fe(0) and Fe/Ni (18-19%). Increasing the particle and oxidant doses improved the efficiency of treatment, but overall degradation extents did not exceed 30%, suggesting that only a small part of the PCBs in soil was available for reaction with the catalyst and/or oxidant. The use of an organic solvent or cyclodextrin to improve PCB availability in the soil did not enhance degradation efficiency, underscoring the strong impact of the soil matrix. Moreover, better PCB degradation was observed in sand spiked with extractable organic matter separated from the contaminated soil. In contrast to the fractions with larger particle size (250-500 and >500 μm), no PCB degradation was observed in the finest fraction (≤250 μm), which had a higher organic matter content. These findings

  14. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. For NASA's SLS ABEDRR procurement, Dynetics and AR formed a team to offer a series of full-scale risk mitigation hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the

  15. Simultaneous nitrate reduction and acetaminophen oxidation using the continuous-flow chemical-less VUV process as an integrated advanced oxidation and reduction process.

    PubMed

    Moussavi, Gholamreza; Shekoohiyan, Sakine

    2016-11-15

    This work was aimed at investigating the performance of a continuous-flow VUV photoreactor as a novel chemical-less advanced process for simultaneously oxidizing acetaminophen (ACT), as a model pharmaceutical, and reducing nitrate in a single reactor. Solution pH was an important parameter affecting the performance of the VUV process; the highest ACT oxidation and nitrate reduction were attained at solution pH between 6 and 8. The ACT was oxidized mainly by hydroxyl radicals (HO•), while aqueous electrons were the main working agents in the reduction of nitrate. The performance of the VUV photoreactor improved with increasing hydraulic retention time (HRT); complete degradation of ACT and ~99% reduction of nitrate with 100% N2 selectivity were achieved at an HRT of 80 min. The VUV effluent concentrations of nitrite and ammonium at an HRT of 80 min were below the drinking water standards. A real water sample contaminated with ACT and nitrate was efficiently treated in the VUV photoreactor. Therefore, the VUV photoreactor is a chemical-less advanced process in which both advanced oxidation and advanced reduction reactions are accomplished. This unique feature makes the VUV photoreactor a promising method for treating water contaminated with both pharmaceuticals and nitrate.

  16. The quantum Allan variance

    NASA Astrophysics Data System (ADS)

    Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał

    2016-08-01

    The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
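
    For reference, the classical (two-sample) Allan variance that the quantum bound above generalizes is easily estimated from a series of fractional-frequency averages; the NumPy sketch and its white-noise example below are illustrative assumptions, not the paper's quantum formalism:

        import numpy as np

        def allan_variance(y):
            """Non-overlapping Allan variance of consecutive fractional-frequency
            averages y_k at a fixed averaging time tau:
            sigma_y^2(tau) = 0.5 * <(y_{k+1} - y_k)^2>."""
            y = np.asarray(y, dtype=float)
            return 0.5 * np.mean(np.diff(y) ** 2)

        # White frequency noise: the Allan variance equals the noise variance.
        rng = np.random.default_rng(1)
        print(allan_variance(1e-12 * rng.standard_normal(10_000)))  # ~1e-24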

  17. Materials selection of surface coatings in an advanced size reduction facility. [For decommissioned stainless steel equipment]

    SciTech Connect

    Briggs, J. L.; Younger, A. F.

    1980-06-02

    A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid-plutonium environments. Of the eight paints tested, a vinyl copolymer base paint was the only coating whose properties permitted satisfactory decontamination of plutonium, and it performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests.

  18. DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION

    SciTech Connect

    Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson

    2002-02-01

    The primary objective of the project titled ''Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction'' was to demonstrate at proof-of-concept scale the use of an online software package, the ''Plant Environmental and Cost Optimization System'' (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.

  19. Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra-high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long-haul, commercial quadjet transport with advanced, high-bypass engines mounted above the wing.

  1. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  2. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  3. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
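
    The minimum variance unbiased estimate described here is the generalized least squares (BLUE) weighted mean, with weights set by the inverse of the modeled distance covariance. A sketch under an assumed exponential correlation-with-distance model (the report's exact model is not given here); names and parameters are illustrative.

    ```python
    import numpy as np

    def blue_mean(values, coords, sill=1.0, corr_range=50.0):
        """Best linear unbiased estimate of a regional mean from spatially
        correlated samples, assuming Cov(i, j) = sill * exp(-d_ij / corr_range)."""
        values, coords = np.asarray(values, float), np.asarray(coords, float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        C = sill * np.exp(-d / corr_range)            # modeled covariance matrix
        w = np.linalg.solve(C, np.ones(len(values)))  # w proportional to C^-1 * 1
        w /= w.sum()                                  # unit-sum weights => unbiased
        return float(w @ values)                      # minimum variance under the model
    ```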

  4. Advanced Subsonic Technology Noise Reduction Element Separate Flow Nozzle Tests for Engine Noise Reduction Sub-Element

    NASA Technical Reports Server (NTRS)

    Saiyed, Naseem H.

    2000-01-01

    Contents of this presentation include: Advanced Subsonic Technology (AST) goals and general information; Nozzle nomenclature; Nozzle schematics; Photograph of all baselines; Configurations tested and types of data acquired; and Engine cycle and plug geometry impact on EPNL.

  5. [Usefulness of reductive surgery for elderly advanced breast cancer with bone metastases - a case report].

    PubMed

    Sakurai, Kenichi; Fujisaki, Shigeru; Nagashima, Saki; Maeda, Tetsuyo; Tomita, Ryouichi; Suzuki, Shuhei; Hara, Yukiko; Hirano, Tomohiro; Enomoto, Katsuhisa; Amano, Sadao

    2014-11-01

    We report the case of an elderly, advanced breast cancer patient with multiple bone metastases. Breast reduction surgery was useful for this patient. The patient was an 81-year-old woman who had a breast lump. A core needle biopsy for breast cancer led to a diagnosis of invasive ductal carcinoma. The mucinous carcinoma was estrogen receptor (ER) and progesterone receptor (PgR) positive and HER2/neu negative. Due to patient complications, it was not possible to treat with chemotherapy. The patient was administered aromatase inhibitors (AI) and zoledronic acid hydrate. However, the AI treatment was not effective, and so she was administered toremifene. Toremifene treatment was effective for 6 months, after which she received fulvestrant. Fulvestrant treatment maintained stable disease (SD) for 14 months. After 14 months of fulvestrant treatment, serum concentrations of the tumor markers CA15-3, CEA, and BCA225 increased. We therefore decided to perform breast reduction surgery. The pathological diagnosis from the surgically resected specimen was mucinous carcinoma, positive for ER and HER2, and negative for PgR. After surgery, serum concentrations of the tumor markers decreased. Following surgery, the patient was administered lapatinib plus denosumab plus fulvestrant. The patient remains well, without bone metastases, 2 years and 6 months after surgery.

  6. EPA RREL's mobile volume reduction unit advances soil washing at four Superfund sites

    SciTech Connect

    Gaire, R.; Borst, M.

    1994-12-31

    Research testing of the U.S. Environmental Protection Agency (EPA) Risk Reduction Engineering Laboratory's (RREL) Volume Reduction Unit (VRU) produced data helping advance soil washing as a remedial technology for contaminated soils. Based on research at four Superfund sites, each with a different matrix of organic contaminants, EPA evaluated the soil washing technology and provided information to forecast realistic, full-scale remediation costs. Primarily a research tool, the VRU is RREL's mobile test unit for investigating the breadth of this technology. During a Superfund Innovative Technology Evaluation (SITE) Demonstration at the Escambia Wood Treating Company Site, Pensacola, FL, the VRU treated soil contaminated with pentachlorophenol (PCP) and polynuclear aromatic hydrocarbon-laden creosote (PAH). At the Montana Pole and Treatment Plant Site, Butte, MT, the VRU treated soil containing PCP mixed with diesel oil (measured as total petroleum hydrocarbons) and a trace of dioxin. At the Dover Air Force Base Site, Dover, DE, the VRU treated soil containing JP-4 jet fuel, measured as TPHC. At the Sand Creek Site, Commerce City, CO, the feed soil was contaminated with two pesticides: heptachlor and dieldrin. Less than 10 percent of these pesticides remained in the treated coarse soil fractions.

  7. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
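
    For orientation, the standard transfer-function relation between the Allan variance and the one-sided spectral density of fractional frequency, S_y(f), is

    ```latex
    \sigma_y^2(\tau) \;=\; 2\int_0^{\infty} S_y(f)\,
        \frac{\sin^4(\pi f \tau)}{(\pi f \tau)^2}\, df .
    ```

    Roughly speaking, because the kernel has nulls at f = k/τ, spectra that differ in how they weight those nulls can yield the same σ_y²(τ) at every τ; this is the kind of ambiguity the paper characterizes.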

  8. Aeronautical fuel conservation possibilities for advanced subsonic transports. [application of aeronautical technology for drag and weight reduction]

    NASA Technical Reports Server (NTRS)

    Braslow, A. L.; Whitehead, A. H., Jr.

    1973-01-01

    The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R and T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.

  9. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes.

    PubMed

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1 and 16S rRNA genes, by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol L(-1) with a pH of 3.0 and reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including initial Fe(2+)/H2O2 molar ratios, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs.
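
    Since the abstract reports first-order kinetics, the rate constant can be recovered from a log-linear fit of gene concentration versus treatment time. A minimal sketch with hypothetical numbers (the paper's raw data are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical ARG concentrations (gene copies/mL) during a 2 h Fenton treatment.
    t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])        # minutes
    c = np.array([1.0e7, 3.2e6, 1.1e6, 3.5e5, 1.2e5])   # measured copies

    # First-order model: c(t) = c0 * exp(-k * t), so ln c is linear in t.
    slope, intercept = np.polyfit(t, np.log(c), 1)
    k = -slope                                          # rate constant, 1/min
    log10_removal_2h = k * 120.0 / np.log(10.0)         # log reduction after 2 h
    print(f"k = {k:.4f} min^-1, ~{log10_removal_2h:.2f} logs removed in 2 h")
    ```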

  10. Conceptual design study of advanced acoustic composite nacelle. [for achieving reductions in community noise and operating expense]

    NASA Technical Reports Server (NTRS)

    Goodall, R. G.; Painter, G. W.

    1975-01-01

    Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and in operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.

  11. Mechanisms of advanced oxidation processing on bentonite consumption reduction in foundry.

    PubMed

    Wang, Yujue; Cannon, Fred S; Komarneni, Sridhar; Voigt, Robert C; Furness, J C

    2005-10-01

    Prior full-scale foundry data have shown that when an advanced oxidation (AO) process is employed in a green sand system, the foundry needs 20-35% less makeup bentonite clay than when AO is not employed. We herein sought to explore the mechanism of this enhancement and found that AO water displaced the carbon coating of pyrolyzed carbonaceous condensates that otherwise accumulated on the bentonite surface. This was discerned by surface elemental analysis. This AO treatment restored the clay's capacity to adsorb methylene blue (as a measure of its surface charge) and water vapor (as a reflection of its hydrophilic character). In full-scale foundries, these parameters have been tied to improved green compressive strength and mold performance. When baghouse dust from a full-scale foundry received ultrasonic treatment in the lab, 25-30% of the dust classified into the clay-size fraction, whereas only 7% classified this way without ultrasonics. Also, the ultrasonication caused a size reduction of the bentonite due to the delamination of bentonite particles. The average bentonite particle diameter decreased from 4.6 to 3 microm, while the light-scattering surface area increased over 50% after 20 min ultrasonication. This would greatly improve the bonding efficiency of the bentonite according to the classical clay bonding mechanism. The combination of these mechanisms accounts for the reduced bentonite consumption observed in full-scale foundries.

  12. Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.

  13. Degradation of diclofenac by advanced oxidation and reduction processes: kinetic studies, degradation pathways and toxicity assessments.

    PubMed

    Yu, Hui; Nie, Er; Xu, Jun; Yan, Shuwen; Cooper, William J; Song, Weihua

    2013-04-01

    Many pharmaceutical compounds and metabolites are found in surface and ground waters, suggesting their ineffective removal by conventional wastewater treatment technologies. Advanced oxidation/reduction processes (AO/RPs), which utilize free radical reactions to directly degrade chemical contaminants, are alternatives to traditional water treatment. This study reports the absolute rate constants for reaction of diclofenac sodium and a model compound (2,6-dichloroaniline) with the two major AO/RP radicals: the hydroxyl radical (•OH) and the hydrated electron (e⁻(aq)). The bimolecular reaction rate constants (M⁻¹ s⁻¹) for diclofenac were (9.29 ± 0.11) × 10⁹ for •OH and (1.53 ± 0.03) × 10⁹ for e⁻(aq). To provide a better understanding of the decomposition of the intermediate radicals produced by hydroxyl radical reactions, transient absorption spectra were observed from 1 to 250 μs. In addition, preliminary degradation mechanisms and major products were elucidated using ⁶⁰Co γ-irradiation and LC-MS. The toxicity of products was evaluated using luminescent bacteria. These data are required both for evaluating the potential use of AO/RPs for the destruction of these compounds and for studies of their fate and transport in surface waters, where radical chemistry may be important in assessing their lifetime.

  14. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly reduced metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell-based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
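
    The edge-extended correction idea can be illustrated as ordinary least squares with one extra basis term that is only active near the wafer edge. The production model and parameters in the paper are not public, so everything named below is a hypothetical stand-in.

    ```python
    import numpy as np

    def fit_overlay_edge_model(r, overlay, r_edge=140.0, r_wafer=150.0):
        """Toy fit of radial overlay error (radius in mm, overlay in nm) with a
        low-order polynomial plus an edge-only ramp activating beyond r_edge."""
        edge = np.clip(r - r_edge, 0.0, None) / (r_wafer - r_edge)  # 0 inside wafer
        A = np.column_stack([np.ones_like(r), r, r ** 2, edge])     # design matrix
        coef, *_ = np.linalg.lstsq(A, overlay, rcond=None)
        return coef, A @ coef          # model coefficients and fitted correction
    ```

    Because the edge term adds only one parameter, a sparse edge sampling can still constrain it, which is the metrology saving the abstract describes.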

  15. Advanced sewage treatment process with excess sludge reduction and phosphorus recovery.

    PubMed

    Saktaywin, W; Tsuno, H; Nagare, H; Soyama, T; Weerapakkaroon, J

    2005-03-01

    An advanced sewage treatment process has been developed in which excess sludge reduction by ozonation and phosphorus recovery by a crystallization process are incorporated into a conventional anaerobic/oxic (A/O) phosphorus removal process. A mathematical model was developed to describe the mass balance principle of this process at steady state. Sludge ozonation experiments were carried out to investigate the solubilization characteristics of sludge and changes in microbial activity, using sludge cultured on synthetic sewage under the A/O process. Phosphorus was solubilized by ozonation as well as organics, and acid-hydrolyzable phosphorus (AHP) made up most of the solubilized phosphorus for sludge containing phosphorus accumulating organisms (PAOs). At 30% solubilization, around 70% of the sludge was inactivated by ozonation. The results indicated that the proposed process configuration has the potential to reduce excess sludge production as well as to recover phosphorus in usable forms. Performance results show that the system is practical; a solubilization degree of 30% was achieved at an ozone consumption of 30 mg O3/g SS.

  16. Advances of Ag, Cu, and Ag-Cu alloy nanoparticles synthesized via chemical reduction route

    NASA Astrophysics Data System (ADS)

    Tan, Kim Seah; Cheong, Kuan Yew

    2013-04-01

    Silver (Ag) and copper (Cu) nanoparticles have shown great potential in a variety of applications due to their excellent electrical and thermal properties, resulting in high demand in the market. Decreasing their size to the nanometer scale distinctly improves these inherent properties due to the larger surface-to-volume ratio. Ag and Cu nanoparticles also show higher surface reactivity and are therefore used to improve interfacial and catalytic processes. Their melting points are also dramatically lower than those of the bulk metals, so the particles can be processed at relatively low temperature. In addition, alloying Ag into Cu to create Ag-Cu alloy nanoparticles can mitigate the rapid oxidation of Cu nanoparticles. A variety of methods have been reported for the synthesis of Ag, Cu, and Ag-Cu alloy nanoparticles. This review covers the chemical reduction route for the synthesis of these nanoparticles. Advances of this technique utilizing different reagents, namely metal salt precursors, reducing agents, and stabilizers, as well as their effects on the respective nanoparticles, are systematically reviewed. Other parameters, such as pH and temperature, that are considered important factors influencing the quality of these nanoparticles are also reviewed thoroughly.

  17. Nominal analysis of "variance".

    PubMed

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
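
    To make the match/mismatch idea concrete, here is a rough one-factor illustration: mismatch proportions play the role that sums of squares play in ANOVA, and significance comes from label permutation. This is a simplified analogue, not the published NANOVA procedure (for instance, a between-minus-within difference is used below in place of the paper's N ratio), and it assumes every group has at least two responses.

    ```python
    import numpy as np

    def mismatch(a):
        """Proportion of unordered pairs in a whose nominal responses differ."""
        a = np.asarray(a)
        diff = a[:, None] != a[None, :]
        return diff[np.triu_indices(len(a), k=1)].mean()

    def nanova_like(groups, n_perm=2000, seed=0):
        """Between-vs-within mismatch statistic with a permutation p-value."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate(groups)
        cuts = np.cumsum([len(g) for g in groups])[:-1]
        stat = lambda gs: mismatch(np.concatenate(gs)) - np.mean([mismatch(g) for g in gs])
        obs = stat(groups)
        hits = sum(stat(np.split(rng.permutation(pooled), cuts)) >= obs
                   for _ in range(n_perm))
        return obs, (hits + 1) / (n_perm + 1)   # statistic and resampled p-value
    ```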

  18. 45 CFR 800.106 - Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 3 2014-10-01 2014-10-01 false Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions. 800.106 Section 800.106 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE OF PERSONNEL MANAGEMENT MULTI-STATE PLAN PROGRAM...

  19. 45 CFR 800.106 - Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 3 2013-10-01 2013-10-01 false Cost-sharing limits, advance payments of premium tax credits, and cost-sharing reductions. 800.106 Section 800.106 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE OF PERSONNEL MANAGEMENT MULTI-STATE PLAN PROGRAM...

  1. Recent advances in reduction methods for nonlinear problems. [in structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1981-01-01

    Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.

  2. Bronchoscopic lung volume reduction by endobronchial valve in advanced emphysema: the first Asian report

    PubMed Central

    Park, Tai Sun; Hong, Yoonki; Lee, Jae Seung; Oh, Sang Young; Lee, Sang Min; Kim, Namkug; Seo, Joon Beom; Oh, Yeon-Mok; Lee, Sang-Do; Lee, Sei Won

    2015-01-01

    Purpose: Endobronchial valve (EBV) therapy is increasingly being seen as a therapeutic option for advanced emphysema, but its clinical utility in Asian populations, who may have different phenotypes to other ethnic populations, has not been assessed. Patients and methods: This prospective open-label single-arm clinical trial examined the clinical efficacy and the safety of EBV in 43 consecutive patients (mean age 68.4±7.5, forced expiratory volume in 1 second [FEV1] 24.5%±10.7% predicted, residual volume 208.7%±47.9% predicted) with severe emphysema with complete fissure and no collateral ventilation in a tertiary referral hospital in Korea. Results: Compared to baseline, the patients exhibited significant improvements 6 months after EBV therapy in terms of FEV1 (from 0.68±0.26 L to 0.92±0.40 L; P<0.001), 6-minute walk distance (from 233.5±114.8 m to 299.6±87.5 m; P=0.012), modified Medical Research Council dyspnea scale (from 3.7±0.6 to 2.4±1.2; P<0.001), and St George’s Respiratory Questionnaire (from 65.59±13.07 to 53.76±11.40; P=0.028). Nine patients (20.9%) had a tuberculosis scar, but these scars did not affect target lobe volume reduction or pneumothorax frequency. Thirteen patients had adverse events; ten (23.3%) developed pneumothorax, which included one death due to tension pneumothorax. Conclusion: EBV therapy was as effective and safe in Korean patients as it has been shown to be in Western countries. (Trial registration: ClinicalTrials.gov: NCT01869205). PMID:26251590

  3. Cosmology without cosmic variance

    SciTech Connect

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
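
    Schematically, the cancellation works because both probes trace the same underlying density modes: with galaxies giving δ_g = (b + fμ²)δ and lensing measuring δ directly, the mode-by-mode ratio

    ```latex
    \frac{\hat\delta_g(\mathbf{k})}{\hat\delta_\kappa(\mathbf{k})}
        \;\propto\; b + f\mu^2
    ```

    no longer contains the random amplitude δ(k), so the uncertainty on the growth quantity is set by measurement noise rather than by the finite number of modes in the survey volume. (This is the standard sample-variance-cancellation argument, stated here for orientation; the notation is generic rather than the paper's.)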

  4. Propulsion Noise Reduction Research in the NASA Advanced Air Transport Technology Project

    NASA Technical Reports Server (NTRS)

    Van Zante, Dale; Nark, Douglas; Fernandez, Hamilton

    2017-01-01

    The Aircraft Noise Reduction (ANR) sub-project is focused on the generation, development, and testing of component noise reduction technologies progressing toward the NASA far term noise goals while providing associated near and mid-term benefits. The ANR sub-project has efforts in airframe noise reduction, propulsion (including fan and core) noise reduction, acoustic liner technology, and propulsion airframe aeroacoustics for candidate conventional and unconventional aircraft configurations. The current suite of propulsion specific noise research areas is reviewed along with emerging facility and measurement capabilities. In the longer term, the changes in engine and aircraft configuration will influence the suite of technologies necessary to reduce noise in next generation systems.

  5. Reduction of Advanced Breast Cancer Stages at Subsequent Participation in Mammography Screening.

    PubMed

    Weigel, S; Heindel, W; Heidrich, J; Heidinger, O; Hense, H W

    2016-01-01

    The decline in advanced breast cancer stages is presumably the most relevant surrogate parameter in mammography screening. It represents the last step in the causal cascade that is expected to affect breast cancer-related mortality. To assess the effectiveness of population-based screening, we analyzed the 2-year incidence rates of advanced breast cancers between women participating in the initial and in the first subsequent round. The study included data from 19,563 initial and 18,034 subsequent examinations of one digital screening unit (2008 - 2010). Data on tumor stages, detected by screening or within the following interval of two years (2-year incidence), were provided by the epidemiological cancer registry. Rates of all and combined UICC stages 2, 3 and 4 (advanced stages) were reported for a two-year period. Proportions were tested for significance by using chi-square tests (p < 0.001). The 2-year incidence rate of all stages was significantly lower in participants in subsequent screening than in initial screening (0.85 vs. 1.29 per 100 women (%); p < 0.0001). A significantly lower 2-year incidence of advanced stages was observed for subsequent screening compared to initial screening (0.26 % vs. 0.48 %; p = 0.0007). Among women aged 50 to 59 years, the incidence of advanced stages was less clearly different (0.21 % vs. 0.35 %; p = 0.07) than in women aged 60 to 69 years (0.31 % vs. 0.70 %; p = 0.0008). During the change from prevalent to incident phase mammography screening, a program impact is seen by a lower 2-year incidence of advanced breast cancers within subsequent compared to initial participants, predominately in women aged 60 to 69 years. • The incidence of advanced tumor stages represents the most relevant surrogate parameter for screening effectiveness. • For the first time the 2-year incidence of advanced breast cancer stages after subsequent mammography screening was analyzed. • We observed a

  6. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  7. Recent advances in membrane bio-technologies for sludge reduction and treatment.

    PubMed

    Wang, Zhiwei; Yu, Hongguang; Ma, Jinxing; Zheng, Xiang; Wu, Zhichao

    2013-12-01

    This paper is designed to critically review the recent developments of membrane bio-technologies for sludge reduction and treatment by covering process fundamentals, performance (sludge reduction efficiency, membrane fouling, pollutant removal, etc.), and key operational parameters. The future perspectives of the hybrid membrane processes for sludge reduction and treatment are also discussed. For sludge reduction using membrane bioreactors (MBRs), the literature review shows that biological maintenance metabolism, predation on bacteria, and uncoupling metabolism through use of the oxic-settling-anaerobic (OSA) process are promising ways that can be employed in full-scale applications. Control methods for worm proliferation are greatly needed, and a good sludge reduction and MBR performance can be expected if worm growth is properly controlled. For the lysis-cryptic sludge reduction method, improvement of oxidant dispersion and increased interaction with sludge cells can enhance the lysis efficiency. Green uncoupler development might be another research direction for uncoupling metabolism in MBRs. An aerobic hybrid membrane system can perform well for sludge thickening and digestion in small- and medium-sized wastewater treatment plants (WWTPs), and pilot-scale/full-scale applications have been reported. The anaerobic membrane digestion (AMD) process is a very competitive technology for sludge stabilization and digestion. Use of biogas recirculation for fouling control can be a powerful way to decrease the energy requirements of the AMD process. Future research efforts should be dedicated to membrane preparation for high biomass applications, process optimization, and pilot-scale/full-scale tracking research in order to push forward the real and wide applications of the hybrid membrane systems for sludge minimization and treatment.

  8. Advanced experimental analysis of controls on microbial Fe(III) oxide reduction. First year progress report

    SciTech Connect

    Roden, E.E.; Urrutia, M.M.

    1997-07-01

    The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10-20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggested that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g., extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and pattern

  9. Tungsten Contact and Line Resistance Reduction with Advanced Pulsed Nucleation Layer and Low Resistivity Tungsten Treatment

    NASA Astrophysics Data System (ADS)

    Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi

    2010-09-01

    This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.

  10. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    SciTech Connect

    Sorge, J.N.; Menzies, B.; Smouse, S.M.; Stallings, J.W.

    1995-09-01

    This technology project demonstrates advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.

  11. Stereospecific Reductions of Delta4-Cholesten-3-one: An Advanced Organic Synthesis Project.

    ERIC Educational Resources Information Center

    Markgraf, J. Hodge; And Others

    1988-01-01

    Outlines a multistep project involving oxidation of cholesterol, isomerization of an enone, and reduction of delta-4-cholesten-3-one. Featured is the last stage in which the ring junction is set stereospecifically. Recommends two laboratory periods to complete the reaction. (ML)

  12. NMR Studies of Structure-Reactivity Relationships in Carbonyl Reduction: A Collaborative Advanced Laboratory Experiment

    ERIC Educational Resources Information Center

    Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab

    2012-01-01

    An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…

  13. New advanced BARC and gap fill materials based on sublimate reduction for 193 nm lithography

    NASA Astrophysics Data System (ADS)

    Takei, Satoshi; Shinjo, Tetsuya; Sakaida, Yasushi; Horiguchi, Yusuke; Nakajima, Yasuyuki

    2006-03-01

    Innovative technologies are required by integrated circuit manufacturers to create smaller feature sizes on chips. According to the semiconductor roadmap, feature sizes are slated to be as small as 45 nm in 2007, and sizes will continue to decrease in the following years. Suitable absorbance, lower etch resistance, straight photoresist profiles, wider D.O.F., thinner film thickness, more effective barrier properties to reduce resist poisoning, and sublimate reduction for defect-free coating are the major concerns to be taken into consideration for new BARC and gap fill materials. In this paper, sublimate reduction in the new BARC and gap fill materials was investigated. Reducing sublimation from the BARC during the bake process decreases the defect count. We introduce new BARC and gap fill materials consisting of polymers with a self-crosslinking system. In addition to the sublimate reduction data, the resist profiles and 130 nm via fill performance in a via-first dual damascene process presented here show clearly that these materials are ready to be introduced into mass production of 90 nm node IC devices and beyond.

  14. Assessment of a novel lung sealant for performing endoscopic volume reduction therapy in patients with advanced emphysema.

    PubMed

    Herth, Felix J F; Eberhardt, Ralf; Ingenito, Edward P; Gompelmann, Daniela

    2011-05-01

    AeriSeal Emphysematous Lung Sealant is a novel endoscopic lung-volume reduction therapy designed to reduce hyperinflation and improve pulmonary function and quality of life in patients with advanced emphysema. The device is administered to the subsegmental bronchus via a catheter as a 20 ml volume of liquid-foam. It flows into the peripheral airways and alveoli where it polymerizes and functions as a tissue glue, forming a film of material on the lung surface that seals the target region to cause durable absorption atelectasis. The AeriSeal System received CE mark approval for the treatment of patients with advanced upper lobe predominant and homogeneous emphysema based upon favorable results from clinical studies, and is commercially available in Europe. Patient and treatment site selection algorithms have been developed to simplify product use and optimize outcomes. This manuscript summarizes how the device is used, its mechanism of action and clinical trial results supporting its safety and efficacy.

  15. Reduction of trihalomethane precursors of dissolved organic matter in the secondary effluent by advanced treatment processes.

    PubMed

    Wei, Liang-Liang; Zhao, Qing-Liang; Xue, Shuang; Chang, Chein-Chi; Tang, Feng; Liang, Guan-Liang; Jia, Ting

    2009-09-30

    Wastewater effluent collected from the Wenchang Wastewater Treatment Plant (Harbin, China) was used as source water for advanced treatment and reclamation. Since dissolved organic matter (DOM) in the secondary effluent contains a high concentration of trihalomethane (THM) precursors, several processes of advanced treatments including granular activated carbon (GAC) adsorption, sand column biodegradation, horizontal subsurface flow wetland (HSFW) treatment, laboratory-scale soil aquifer treatment (SAT) and GAC+SAT were used in this study to compare and differentiate the removal mechanisms of DOM. DOM in the secondary effluent and the treated effluents was fractionated into five classes using XAD resins: hydrophobic acid (HPO-A), hydrophobic neutral (HPO-N), transphilic acid (TPI-A), transphilic neutral (TPI-N), and hydrophilic fraction (HPI). Results showed that HPO-A and HPI were the two main fractions of the DOM in the secondary effluent, accounting for 30.0% and 45.5% of the bulk DOM, respectively. HPO-A exhibited higher trihalomethane formation potential (THMFP) and specific THMFP (STHMFP) than HPI during the chlorination process. The order of the dissolved organic carbon (DOC) removal with respect to different advanced treatments was observed to be GAC+SAT>SAT>GAC>sand column>HSFW. As for the DOM removal mechanisms, the advanced treatment processes of GAC adsorption, SAT and GAC+SAT tended to adsorb more HPO-A, HPO-N and TPI-A and could reduce the aromaticity of those DOM fractions efficiently. Correspondingly, the advanced treatment processes of sand column, SAT, HSFW and GAC+SAT removed more HPI and TPI-N through biodegradation and each of the DOM fractions had an increased aromaticity. The removal order of the THM precursors by the advanced treatment processes was GAC+SAT>GAC>SAT>sand column>HSFW. The adsorption reduced the STHMFP of the DOM fractions effectively, whereas the biodegradation mechanism of the treatments (sand column, SAT, GAC+SAT and HSFW

  16. Current advances of integrated processes combining chemical absorption and biological reduction for NOx removal from flue gas.

    PubMed

    Zhang, Shihan; Chen, Han; Xia, Yinfeng; Liu, Nan; Lu, Bi-Hong; Li, Wei

    2014-10-01

    Anthropogenic nitrogen oxides (NOx) emitted from fossil-fuel-fired power plants cause adverse environmental issues such as acid rain, urban ozone smoke, and photochemical smog. A novel chemical absorption-biological reduction (CABR) integrated process under development is regarded as a promising alternative to the conventional selective catalytic reduction processes for NOx removal from flue gas because it is economical and environmentally friendly. The CABR process employs ferrous ethylenediaminetetraacetate [Fe(II)EDTA] as a solvent to absorb the NOx, followed by microbial denitrification of the NOx to harmless nitrogen gas. Meanwhile, the absorbent Fe(II)EDTA is biologically regenerated to sustain adequate NOx removal. Compared with the conventional denitrification process, CABR not only enhances the mass transfer of NO from the gas to the liquid phase but also minimizes the impact of oxygen on the microorganisms. This review provides the current advances in the development of the CABR process for NOx removal from flue gas.

  17. Effect of Two Advanced Noise Reduction Technologies on the Aerodynamic Performance of an Ultra High Bypass Ratio Fan

    NASA Technical Reports Server (NTRS)

    Hughes, Christoper E.; Gazzaniga, John A.

    2013-01-01

    A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 and 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.

  18. External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.; Geng, Steven M.

    2013-01-01

    Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.

  1. Research requirements for development of advanced-technology helicopter transmissions. [reduction of maintenance costs]

    NASA Technical Reports Server (NTRS)

    Lemanski, A. J.

    1976-01-01

    Helicopter drive-system technology that would yield the largest benefit in direct maintenance cost when applied to civil helicopters in the 1980 timeframe was identified. A prototype baseline drive system based on 1975 technology provided the basis for comparison against the proposed advanced technology in order to determine the potential for each area recommended for improvement. A specific design example of an advanced-technology main transmission is presented to define improvements for maintainability, weight, producibility, reliability, noise, vibration, and diagnostics. Projections of the technology achievable in the 1980 timeframe are presented. Based on these data, the technologies with the highest payoff (lowest direct maintenance cost) for civil-helicopter drive systems are identified.

  2. Energy Saving Melting and Revert Reduction Technology (Energy SMARRT): Manufacturing Advanced Engineered Components Using Lost Foam Casting Technology

    SciTech Connect

    Littleton, Harry; Griffin, John

    2011-07-31

    This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control, and marketing tools were developed to advance the Lost Foam Casting process application and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy saving estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion BTUs/year, rising to 6.46 trillion BTUs/year with 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with the melting and pouring of the metal. The average annual estimate of CO2 reduction per year through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).

  3. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2017-02-01

    One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of the atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods, in comparison with some state-of-the-art algorithms for dimensionality reduction, is evaluated on the statistical downscaling process of precipitation at a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement with the Supervised PCA methods in terms of performance accuracy.
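
    For orientation, a minimal sketch of linear Supervised PCA in its usual HSIC formulation (Barshan et al.): the retained directions maximize the dependence tr(UᵀXHLHXᵀU) between projected predictors and the response. Variable names are illustrative, and the kernelized variant used in the paper is omitted.

    ```python
    import numpy as np

    def supervised_pca(X, y, k):
        """Linear Supervised PCA. X: (p, n) predictors, y: (n,) response.
        Returns the k components of X with maximal HSIC dependence on y."""
        n = X.shape[1]
        H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
        L = np.outer(y, y)                             # linear kernel of the response
        Q = X @ H @ L @ H @ X.T                        # p x p dependence matrix
        eigvals, eigvecs = np.linalg.eigh(Q)
        U = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k directions
        return U.T @ X                                 # reduced representation (k, n)
    ```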

  4. ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION

    SciTech Connect

    Vangelas, K; ELIZABETH EDWARDS, E; FRANK LOFFLER, F; Brian02 Looney, B

    2006-11-17

    Regulatory protocols generally recognize that destructive processes are the most effective mechanisms that support natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological processes and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) provides a list of parameters that provide indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field. The research described herein continues their development efforts to provide a suite of tools to enable direct measures of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities in a system because it does not distinguish between strains of organisms that have different biodegradation capabilities. The results of the investigations provided evidence that tools focusing on relevant enzymes for functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.

  5. Small Drinking Water System Variances

    EPA Pesticide Factsheets

    Small system variances allow a small system to install and maintain technology that can remove a contaminant to the maximum extent that is affordable and protective of public health in lieu of technology that can achieve compliance with the regulation.

  6. Advanced reburning for reduction of NO sub x emissions in combustion systems

    SciTech Connect

    Seeker, W.R.; Chen, S.L.; Kramlich, J.C.

    1992-08-18

    This patent describes a process for reducing nitrogen oxides in combustion emission systems. It comprises mixing a reburning fuel with combustion emissions in a gaseous reburning zone such that the reburning zone is substantially oxygen deficient; passing the resulting mixture of reburning fuel and combustion emissions into a first burnout zone; introducing a first stream of burnout air into the first burnout zone; advancing the resulting mixture from the first burnout zone to a second burnout zone; and introducing a second stream of burnout air into the second burnout zone.

  7. Armor Possibilities and Radiographic Blur Reduction for The Advanced Hydrotest Facility

    SciTech Connect

    Hackett, M

    2001-09-01

    Currently at Lawrence Livermore National Laboratory (LLNL), a composite firing vessel is under development for the Advanced Hydrotest Facility (AHF) to study high explosives. This vessel requires a shrapnel-mitigating layer to protect it during experiments. The primary purpose of this layer is to protect the vessel, yet the material must be transparent to proton radiographs. Presented here are methods available to collect the data needed before selection, along with a comparison tool developed to aid in choosing a material that offers the best ballistic protection while allowing for clear radiographs.

  8. Advances in earthquake and tsunami sciences and disaster risk reduction since the 2004 Indian ocean tsunami

    NASA Astrophysics Data System (ADS)

    Satake, Kenji

    2014-12-01

    The December 2004 Indian Ocean tsunami was the worst tsunami disaster in the world's history, with more than 200,000 casualties. This disaster was attributed to the giant size of the earthquake (magnitude M ~ 9, source length >1000 km), the lack of anticipation of such an earthquake, and the absence of a tsunami warning system, knowledge, and preparedness for tsunamis in the Indian Ocean countries. In the last ten years, seismology and tsunami sciences as well as tsunami disaster risk reduction have developed significantly. Progress in seismology includes implementation of earthquake early warning, real-time estimation of earthquake source parameters and tsunami potential, paleoseismological studies on past earthquakes and tsunamis, and studies of probable maximum size, recurrence variability, and long-term forecast of large earthquakes in subduction zones. Progress in tsunami science includes accurate modeling of tsunami sources, such as the contribution of horizontal components or "tsunami earthquakes", development of new types of offshore and deep ocean tsunami observation systems such as GPS buoys or bottom pressure gauges, deployment of DART gauges in the Pacific and other oceans, improvements in tsunami propagation modeling, and real-time inversion or data assimilation for tsunami warning. These developments have been utilized for tsunami disaster reduction in the forms of tsunami early warning systems, tsunami hazard maps, and probabilistic tsunami hazard assessments. Some of the above scientific developments helped to reveal the source characteristics of the 2011 Tohoku earthquake, which caused devastating tsunami damage in Japan and the Fukushima Dai-ichi Nuclear Power Station accident. Toward tsunami disaster risk reduction, interdisciplinary and trans-disciplinary approaches are needed between scientists and other stakeholders.

  9. Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet

    PubMed Central

    Uribarri, Jaime; Woodruff, Sandra; Goodman, Susan; Cai, Weijing; Chen, Xue; Pyzik, Renata; Yong, Angie; Striker, Gary E.; Vlassara, Helen

    2013-01-01

    Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and introduces practical approaches for reducing dAGE consumption in daily life. Based on the findings, dry heat promotes new dAGE formation by >10- to 100-fold above the uncooked state across food categories. Animal-derived foods that are high in fat and protein are generally AGE-rich and prone to new AGE formation during cooking. In contrast, carbohydrate-rich foods such as vegetables, fruits, whole grains, and milk contain relatively few AGEs, even after cooking. The formation of new dAGEs during cooking was prevented by the AGE inhibitory compound aminoguanidine and significantly reduced by cooking with moist heat, using shorter cooking times, cooking at lower temperatures, and by use of acidic ingredients such as lemon juice or vinegar. The new dAGE database provides a valuable instrument for estimating dAGE intake and for guiding food choices to reduce dAGE intake. PMID:20497781

  10. Significance of volume-reduction surgery for far-advanced gastric cancer during treatment with novel anticancer agents.

    PubMed

    Yamamoto, Yuji; Yoshikawa, Takaki; Morinaga, Souichirou; Kasahara, Akira; Yoneyama, Katsuya; Osaragi, Tomohiko; Matsuura, Hitoshi; Yoshida, Tatsuya; Hasegawa, Shinichi

    2009-06-01

    We retrospectively assessed the survival benefit of novel anticancer agents (NACA) after volume-reduction surgery for far-advanced gastric cancer (FAGC). From 1995 to 2005, 41 patients with FAGC underwent chemotherapy after volume-reduction surgery. Those treated since 2000 who received NACA were referred to as group A, and those treated before 2000, who received anticancer agents other than NACA, as group B. In addition, 21 patients with unresectable gastric cancer treated since 2000 who received NACA were referred to as group C. We investigated the significance of volume-reduction surgery during treatment with NACA. The median survival time (MST) was significantly longer in group A (626 days) than in group B (364 days; P = 0.0156). Multivariate analysis showed that having a single noncurative factor (NCF) and the use of NACA were factors that contributed to survival time. Comparison between the subgroup of group A with one NCF and the subgroup with two or more NCFs revealed MSTs of 700 days and 180 days, respectively, with a significantly longer MST among the patients with one NCF (P = 0.0021). Moreover, the MST of the group A patients with two or more NCFs did not differ from the 333-day MST of group C. The postoperative survival time of patients with one NCF was prolonged by the advent of NACA, but no significant prolongation was observed in patients with two or more NCFs.

  11. Aerodynamic performance investigation of advanced mechanical suppressor and ejector nozzle concepts for jet noise reduction

    NASA Technical Reports Server (NTRS)

    Wagenknecht, C. D.; Bediako, E. D.

    1985-01-01

    Advanced Supersonic Transport jet noise may be reduced to Federal Aviation Regulation limits if recommended refinements to a recently developed ejector shroud exhaust system are successfully carried out. A two-part program, consisting of a design study and a subscale model wind tunnel test effort, conducted to define an acoustically treated ejector shroud exhaust system for supersonic transport application is described. Coannular, 20-chute, and ejector shroud exhaust systems were evaluated. Program results were used in a mission analysis study to determine the aircraft takeoff gross weight required to perform a nominal design mission under Federal Aviation Regulation (1969), Part 36, Stage 3 noise constraints. Mission trade study results confirmed that the ejector shroud was the best of the three exhaust systems studied, with a significant takeoff gross weight advantage over the 20-chute suppressor nozzle, which was the second best.

  12. Wind-tunnel studies of advanced cargo aircraft concepts. [leading edge vortex flaps for drag reduction]

    NASA Technical Reports Server (NTRS)

    Rao, D. M.; Goglia, G. L.

    1981-01-01

    Accomplishments in vortex flap research are summarized. A singular feature of the vortex flap is that, throughout the angle-of-attack range, the flow type remains qualitatively unchanged. Accordingly, the vortex flap produces no large or sudden change in the aerodynamic characteristics, such as happens when forcibly maintained attached flow suddenly reverts to separation. Typical wind tunnel test data are presented which show the drag reduction potential of the vortex flap concept applied to a supersonic cruise airplane configuration. The new technology offers a means of aerodynamically augmenting roll-control effectiveness on slender wings at higher angles of attack by manipulating the vortex flow generated from leading-edge separation. The proposed manipulator takes the form of a flap hinged at or close to the leading edge, normally retracted flush with the wing upper surface to conform to the airfoil shape.

  13. Cobalt diselenide nanoparticles embedded within porous carbon polyhedra as advanced electrocatalyst for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Wu, Renbing; Xue, Yanhong; Liu, Bo; Zhou, Kun; Wei, Jun; Chan, Siew Hwa

    2016-10-01

    Highly efficient and cost-effective electrocatalyst for the oxygen reduction reaction (ORR) is crucial for a variety of renewable energy applications. Herein, strongly coupled hybrid composites composed of cobalt diselenide (CoSe2) nanoparticles embedded within graphitic carbon polyhedra (GCP) as high-performance ORR catalyst have been rationally designed and synthesized. The catalyst is fabricated by a convenient method, which involves the simultaneous pyrolysis and selenization of preformed Co-based zeolitic imidazolate framework (ZIF-67). Benefiting from the unique structural features, the resulting CoSe2/GCP hybrid catalyst shows high stability and excellent electrocatalytic activity towards ORR (the onset and half-wave potentials are 0.935 and 0.806 V vs. RHE, respectively), which is superior to the state-of-the-art commercial Pt/C catalyst (0.912 and 0.781 V vs. RHE, respectively).

  14. Vibration reduction in advanced composite turbo-fan blades using embedded damping materials

    NASA Astrophysics Data System (ADS)

    Kosmatka, John B.; Lapid, Alex J.; Mehmed, Oral

    1996-05-01

    A preliminary design and analysis procedure for locating an integral damping treatment in composite turbo-propeller blades has been developed. This finite-element-based approach, built upon the modal strain energy method, is used to size and locate the damping material patch so that the damping (loss factor) is maximized in a particular mode while minimizing the overall stiffness loss (minimal reductions in the structural natural frequencies). Numerical results are presented to illustrate the variation in the natural frequencies and damping levels as a result of stacking sequence, integral damping patch size and location, and border materials. Experimental studies using flat and pretwisted (30 degrees) integrally damped composite blade-like structures show how a small internal damping patch can significantly increase the damping levels without sacrificing structural integrity. Moreover, the use of a soft border material around the patch can greatly increase the structural damping levels.

  15. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K^2. Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
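
    The essence of the technique is subtracting the known instrument/measurement noise variance from the raw radiance variance. A minimal sketch of such a noise-corrected variance estimate is given below in Python; the signal amplitude, the noise level, and the assumption that the noise variance is known exactly are hypothetical simplifications of the actual algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        # Hypothetical along-track brightness temperatures: a wave-like
        # perturbation (the "GW signal") plus instrument noise, in kelvin.
        signal = 0.4 * np.sin(2 * np.pi * np.arange(n) / 25.0)
        noise_sd = 0.3                       # assumed known noise level (K)
        tb = signal + rng.normal(0.0, noise_sd, n)

        var_total = tb.var(ddof=1)
        var_gw = var_total - noise_sd ** 2   # remove the noise contribution
        print(f"total variance         = {var_total:.3f} K^2")
        print(f"noise-corrected GW var = {var_gw:.3f} K^2")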

  16. ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR

    SciTech Connect

    Robert S. Weber

    1999-05-01

    Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner, Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on SO2-reduction catalyst performance recently demonstrated at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO2-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported-catalyst formulation work at Engelhard, bench-scale supported-catalyst testing at Arthur D. Little, and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst that is robust and flexible with regard to the choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed: the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated at appropriate levels of conversion and under conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work, which would have included further bench-scale testing.

  17. Advanced turbo-prop airplane interior noise reduction - source definition

    NASA Technical Reports Server (NTRS)

    Magliozzi, B.; Brooks, B. M.

    1979-01-01

    Acoustic pressure amplitudes and phases were measured in model scale on the surface of a rigid semicylinder mounted in an acoustically treated wind tunnel near a prop-fan (an advanced turboprop with many swept blades) model. Operating conditions during the test simulated those of a prop-fan at 0.8 Mach number cruise. Acoustic pressure amplitude and phase contours were defined on the semicylinder surface. Measurements obtained without the semicylinder in place were used to establish the magnitude of pressure doubling for an aircraft fuselage located near a prop-fan. Pressure doubling effects were found to be 6 dB at 90 deg incidence, decreasing to no effect at grazing incidence. Comparisons of measurements with predictions made using a recently developed prop-fan noise prediction theory, which includes linear and nonlinear source terms, showed good agreement in phase and in peak noise amplitude. Predictions of noise amplitude and phase contours, including pressure doubling effects derived from test, are included for a full-scale prop-fan installation.

  18. Boundary layer drag reduction research hypotheses derived from bio-inspired surface and recent advanced applications.

    PubMed

    Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe

    2015-12-01

    Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively become a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesion surfaces, and the flexible skin of the dolphin can increase its swimming speed. Substantial benefits from applying biological functional surfaces in daily life, industry, transportation, and agriculture have been achieved so far, and this field has attracted much attention worldwide. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are then demonstrated in brief. This overview should improve comprehension of the drag reduction mechanism of the sharkskin surface and of its recent applications in fluid engineering.

  19. [Successful treatment of advanced gastric cancer (Borrmann 1 type) with FTP chemotherapy after reduction surgery].

    PubMed

    Nomura, N; Yamada, A; Saitou, F; Tsuzawa, T; Yamashita, I; Sakakibara, T; Shimizu, T; Sakamoto, T; Karaki, Y; Tazawa, K

    1994-05-01

    A 54-year-old man was diagnosed with Borrmann type 1 gastric cancer, located just below the ECJ, with some para-aortic lymph node metastases, during treatment of diabetes mellitus at another hospital. He underwent spleno-total gastrectomy for volume reduction. The metastatic para-aortic lymph nodes were not resected, so the surgery was considered palliative. We administered FTP chemotherapy (CDDP 110 mg on day 1, 5-FU 1,200 mg on days 1-5, THP-ADM 30 mg on day 1) five times following surgery. The metastatic lymph nodes decreased remarkably in size after the initial course; the reduction was 52.4% (PR). After the 4th course, no lymph nodes were detected (CR), and the CR was maintained after the 5th course. The PR period was considered to be 5 months, and the CR period 4 months. The patient had no renal or cardiac dysfunction and no bone marrow suppression. His quality of life is satisfactory, and he continues to work as he did before surgery. FTP chemotherapy is considered a successful regimen for postoperative chemotherapy.

  20. 2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction

    SciTech Connect

    Smith, Aaron; Stehly, Tyler; Musial, Walter

    2015-09-29

    2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015, with demonstrable progress towards industry-wide cost reduction goals. DWW is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase that will illustrate the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.

  1. Advanced noise reduction in placental ultrasound imaging using CPU and GPU: a comparative study

    NASA Astrophysics Data System (ADS)

    Zombori, G.; Ryan, J.; McAuliffe, F.; Rainford, L.; Moran, M.; Brennan, P.

    2010-03-01

    This paper presents a comparison of different implementations of a 3D anisotropic diffusion speckle noise reduction technique for ultrasound images. In this project we are developing a novel volumetric calcification assessment metric for the placenta and providing a software tool for this purpose. The tool can also automatically segment and visualize (in 3D) ultrasound data. One of the first steps in developing such a tool is to find a fast and efficient way to eliminate speckle noise. Previous work on this topic by Duan, Q. [1] and Sun, Q. [2] has shown that the 3D speckle reducing anisotropic diffusion (3D SRAD) method performs exceptionally well in enhancing ultrasound images for object segmentation. We have therefore implemented this method in our software application and performed a comparative study of the different variants in terms of performance and computation time. To increase processing speed it was necessary to utilize the full potential of current state-of-the-art Graphics Processing Units (GPUs). Our 3D datasets are represented in a spherical volume format. For 2D slice visualization and segmentation, a "scan conversion" or "slice reconstruction" step is needed, which includes coordinate transformation from spherical to Cartesian, re-sampling of the volume, and interpolation. By combining the noise filtering and slice reconstruction in one process on the GPU, we can achieve close-to-real-time operation on high-quality data sets without the need for down-sampling or reducing image quality. OpenCL was used for the GPU programming, so the presented solution is fully portable.
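
    For orientation, the sketch below shows the general shape of an edge-preserving anisotropic diffusion filter in 2D NumPy. This is a Perona-Malik-style variant, not the authors' 3D SRAD/OpenCL implementation, and the kappa, dt, and iteration-count values are illustrative only.

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
            """Simplified 2D anisotropic diffusion (image edges wrap around)."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # differences to the four nearest neighbours
                dN = np.roll(u, -1, axis=0) - u
                dS = np.roll(u, 1, axis=0) - u
                dE = np.roll(u, -1, axis=1) - u
                dW = np.roll(u, 1, axis=1) - u
                # edge-stopping conductance: small across strong edges
                cN = np.exp(-(dN / kappa) ** 2)
                cS = np.exp(-(dS / kappa) ** 2)
                cE = np.exp(-(dE / kappa) ** 2)
                cW = np.exp(-(dW / kappa) ** 2)
                u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
            return u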

  2. Is lung volume reduction surgery effective in the treatment of advanced emphysema?

    PubMed

    Zahid, Imran; Sharif, Sumera; Routledge, Tom; Scarci, Marco

    2011-03-01

    A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was whether lung volume reduction surgery (LVRS) might be superior to medical treatment in the management of patients with severe emphysema. Overall 497 papers were found using the reported search, of which 12 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results are tabulated. We conclude that LVRS produces superior patient outcomes compared to medical treatment in terms of exercise capacity, lung function, quality of life and long-term (>1 year postoperative) survival. A large proportion of the best evidence on this topic is based on analysis of the National Emphysema Treatment Trial (NETT). Seven studies compared LVRS to medical treatment alone (MTA) using data generated by the NETT trial. They found higher quality of life scores (45.3 vs. 27.5, P<0.001), improved maximum ventilation (32.8 vs. 29.6 l/min, P=0.001) and a lower exacerbation rate per person-year (0.27 vs. 0.37, P=0.0005) with LVRS than with MTA. Mortality rates for LVRS were greater up to one year (P=0.01), equivalent by three years (P=0.15) and lower after four years (P=0.06) postoperatively compared to MTA. Patients with upper-lobe-predominant disease and low exercise capacity benefited the most from undergoing LVRS rather than MTA in terms of probability of death at five years (0.36 vs. 0.54, P=0.003), compared to patients with non-upper-lobe disease (0.38 vs. 0.45, P=0.03) or upper-lobe disease with high exercise capacity (0.33 vs. 0.38, P=0.32). Five studies compared LVRS to MTA using data independent of the NETT trial. They found greater six-minute walking distances (433 vs. 300 m, P<0.002) and improved total lung capacity (18.8 vs. 7.9% predicted, P<0.02) and quality of life scores (47 vs. 23.2, P<0.05) with LVRS compared to MTA.

  3. Assessment of the Noise Reduction Potential of Advanced Subsonic Transport Concepts for NASA's Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Nickol, Craig L.

    2016-01-01

    Aircraft system noise is predicted for a portfolio of NASA advanced concepts with 2025 entry-into-service technology assumptions. The subsonic transport concepts include tube-and-wing configurations with engines mounted under the wing, over-the-wing nacelle integration, and a double-deck fuselage with engines at a mid-fuselage location. Also included are hybrid wing body aircraft with engines upstream of the fuselage trailing edge. Both advanced direct-drive engines and geared turbofan engines are modeled. Recent acoustic experimental information was utilized in the predictions for several key technologies. The 301-passenger-class hybrid wing body with geared ultra-high-bypass engines is assessed at 40.3 EPNLdB cumulative below the Stage 4 certification level. Other hybrid wing body and unconventional tube-and-wing configurations reach levels of 33 EPNLdB or more below the certification level. Many factors contribute to the system-level result; however, the hybrid wing body in the 301-passenger class, as compared to a tube-and-wing with a conventional engine-under-wing installation, has 11.9 EPNLdB of noise reduction due to replacing reflection with acoustic shielding of engine noise sources. The propulsion airframe aeroacoustic interaction effects therefore clearly differentiate the unconventional configurations that approach or exceed the 42 EPNLdB goal.

  4. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
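
    As a concrete (purely illustrative) instance of the test being described, a one-way ANOVA on three simulated groups can be run with scipy.stats.f_oneway:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        g1 = rng.normal(10.0, 2.0, 30)   # three groups with a common variance
        g2 = rng.normal(11.0, 2.0, 30)
        g3 = rng.normal(12.5, 2.0, 30)

        # Null hypothesis: all group means are equal.
        f_stat, p_value = stats.f_oneway(g1, g2, g3)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")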

  5. VPSim: Variance propagation by simulation

    SciTech Connect

    Burr, T.; Coulter, C.A.; Prommel, J.

    1997-12-01

    One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B - T_out - I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or, more generally, the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
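
    The simulation idea is easy to illustrate. The sketch below draws measurement errors for the four balance terms of a single hypothetical balance period and compares the simulated σ_MB with the analytic POV result for independent errors; the nominal values and one-parameter error models are invented for the example and are far simpler than VPSim's.

        import math
        import numpy as np

        rng = np.random.default_rng(42)
        n_sim = 20_000

        # Hypothetical nominal values for one balance period.
        nominal = {"T_in": 100.0, "I_B": 50.0, "T_out": 98.0, "I_E": 51.0}
        # Assumed relative standard deviations of each measurement system.
        rsd = {"T_in": 0.01, "I_B": 0.02, "T_out": 0.01, "I_E": 0.02}

        draw = {k: rng.normal(v, rsd[k] * v, n_sim) for k, v in nominal.items()}
        mb = draw["T_in"] + draw["I_B"] - draw["T_out"] - draw["I_E"]
        print("simulated sigma_MB =", round(mb.std(ddof=1), 3))

        # Analytic propagation-of-variance check for independent errors.
        sigma = math.sqrt(sum((rsd[k] * v) ** 2 for k, v in nominal.items()))
        print("analytic  sigma_MB =", round(sigma, 3))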

  6. An overview of advanced reduction processes for bromate removal from drinking water: Reducing agents, activation methods, applications and mechanisms.

    PubMed

    Xiao, Qian; Yu, Shuili; Li, Lei; Wang, Ting; Liao, Xinlei; Ye, Yubing

    2017-02-15

    Bromate (BrO3(-)) is a possible human carcinogen regulated at a strict standard of 10 μg/L in drinking water. Techniques to eliminate BrO3(-) usually fall into three main categories: reducing bromide (Br(-)) prior to formation of BrO3(-), minimizing BrO3(-) formation during the ozonation process, and removing BrO3(-) from post-ozonation waters. The first two approaches exhibit low degradation efficiency and high treatment cost. The third approach has clear advantages, such as high reduction efficiency, more stable performance, and easier combination with UV disinfection, and has therefore been widely implemented in water treatment. Recently, advanced reduction processes (ARPs), the photocatalysis of BrO3(-), have attracted much attention due to improved performance. To increase the feasibility of photocatalytic systems, this work focuses on new technological developments, followed by a summary of reducing agents, activation methods, operational parameters, and applications. The reaction mechanisms of two typical processes, UV/sulfite homogeneous photocatalysis and UV/titanium dioxide heterogeneous photocatalysis, are further summarized. Future research needs for ARPs to reach full-scale potential in drinking water treatment are suggested accordingly.

  7. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    SciTech Connect

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) the installation of three on-line carbon-in-ash monitors and (2) the design and results to date of the advanced digital control/optimization phase of the project.

  8. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
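
    As a generic illustration of variance-based sensitivity analysis (not the authors' Poisson-process reformulation of a reaction network), the sketch below estimates first-order Sobol indices for a toy deterministic model with the pick-freeze estimator S_i = Cov(Y, Y_i)/Var(Y):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        def model(x1, x2, x3):
            # toy "simulator"; the exact first-order indices are
            # S1 = 1/5.25, S2 = 4/5.25, S3 = 0
            return x1 + 2.0 * x2 + 0.5 * x1 * x3

        X = rng.normal(size=(n, 3))
        Xp = rng.normal(size=(n, 3))      # independent resample

        y = model(*X.T)
        var_y = y.var()

        for i in range(3):
            Z = Xp.copy()
            Z[:, i] = X[:, i]             # keep input i, resample the rest
            y_i = model(*Z.T)
            s_i = np.cov(y, y_i)[0, 1] / var_y
            print(f"first-order Sobol index S_{i + 1} ~ {s_i:.3f}")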

  9. Mean sojourn time, overdiagnosis, and reduction in advanced stage prostate cancer due to screening with PSA: implications of sojourn time on screening.

    PubMed

    Pashayan, N; Duffy, S W; Pharoah, P; Greenberg, D; Donovan, J; Martin, R M; Hamdy, F; Neal, D E

    2009-04-07

    This study aimed to assess the mean sojourn time (MST) of prostate cancer, to estimate the probability of overdiagnosis, and to predict the potential reduction in advanced stage disease due to screening with PSA. The MST of prostate cancer was derived from detection rates at PSA prevalence testing in 43,842 men, aged 50-69 years, as part of the ProtecT study, from the incidence of non-screen-detected cases obtained from the English population-based cancer registry database, and from PSA sensitivity obtained from the medical literature. The relative reduction in advanced stage disease was derived from the expected and observed incidences of advanced stage prostate cancer. The age-specific MSTs for men aged 50-59 and 60-69 years were 11.3 and 12.6 years, respectively. Overdiagnosis estimates increased with age; 10-31% of the PSA-detected cases were estimated to be overdiagnosed. An interscreening interval of 2 years was predicted to result in 37 and 63% reductions in advanced stage disease in men 65-69 and 50-54 years, respectively. If the overdiagnosed cases were excluded, the estimated reductions were 9 and 54%, respectively. Thus, the benefit of screening in reducing advanced stage disease is limited by overdiagnosis, which is greater in older men.
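
    Under the usual steady-state screening approximations, the derivation reduces to simple arithmetic: preclinical prevalence ~ incidence x MST, and the prevalence-round detection rate ~ sensitivity x prevalence. The sketch below uses invented numbers, not the study's data:

        # Back-of-the-envelope mean sojourn time (all inputs hypothetical).
        detection_rate = 0.030   # cases detected per man at the prevalence round
        sensitivity = 0.9        # assumed PSA episode sensitivity
        incidence = 0.0024       # expected clinical incidence per man-year

        prevalence = detection_rate / sensitivity   # preclinical prevalence
        mst = prevalence / incidence                # prevalence = incidence * MST
        print(f"estimated MST ~ {mst:.1f} years")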

  10. Mean sojourn time, overdiagnosis, and reduction in advanced stage prostate cancer due to screening with PSA: implications of sojourn time on screening

    PubMed Central

    Pashayan, N; Duffy, S W; Pharoah, P; Greenberg, D; Donovan, J; Martin, R M; Hamdy, F; Neal, D E

    2009-01-01

    This study aimed to assess the mean sojourn time (MST) of prostate cancer, to estimate the probability of overdiagnosis, and to predict the potential reduction in advanced stage disease due to screening with PSA. The MST of prostate cancer was derived from detection rates at PSA prevalence testing in 43 842 men, aged 50–69 years, as part of the ProtecT study, from the incidence of non-screen-detected cases obtained from the English population-based cancer registry database, and from PSA sensitivity obtained from the medical literature. The relative reduction in advanced stage disease was derived from the expected and observed incidences of advanced stage prostate cancer. The age-specific MSTs for men aged 50–59 and 60–69 years were 11.3 and 12.6 years, respectively. Overdiagnosis estimates increased with age; 10–31% of the PSA-detected cases were estimated to be overdiagnosed. An interscreening interval of 2 years was predicted to result in 37 and 63% reductions in advanced stage disease in men 65–69 and 50–54 years, respectively. If the overdiagnosed cases were excluded, the estimated reductions were 9 and 54%, respectively. Thus, the benefit of screening in reducing advanced stage disease is limited by overdiagnosis, which is greater in older men. PMID:19293796

  11. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.

  12. Comparison of imputation variance estimators.

    PubMed

    Hughes, R A; Sterne, Jac; Tilling, K

    2016-12-01

    Appropriate imputation inference requires both an unbiased imputation estimator and an unbiased variance estimator. The commonly used variance estimator, proposed by Rubin, can be biased when the imputation and analysis models are misspecified and/or incompatible. Robins and Wang proposed an alternative approach, which allows for such misspecification and incompatibility, but it is considerably more complex. It is unknown whether in practice Robins and Wang's multiple imputation procedure is an improvement over Rubin's multiple imputation. We conducted a critical review of these two multiple imputation approaches, a re-sampling method called full mechanism bootstrapping and our modified Rubin's multiple imputation procedure via simulations and an application to data. We explored four common scenarios of misspecification and incompatibility. In general, for a moderate sample size (n = 1000), Robins and Wang's multiple imputation produced the narrowest confidence intervals, with acceptable coverage. For a small sample size (n = 100) Rubin's multiple imputation, overall, outperformed the other methods. Full mechanism bootstrapping was inefficient relative to the other methods and required modelling of the missing data mechanism under the missing at random assumption. Our proposed modification showed an improvement over Rubin's multiple imputation in the presence of misspecification. Overall, Rubin's multiple imputation variance estimator can fail in the presence of incompatibility and/or misspecification. For unavoidable incompatibility and/or misspecification, Robins and Wang's multiple imputation could provide more robust inferences.
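
    For reference, Rubin's variance estimator combines the m completed-data analyses as T = W + (1 + 1/m)B, where W is the average within-imputation variance and B is the between-imputation variance of the point estimates. A minimal sketch with made-up numbers:

        import numpy as np

        def rubin_combine(estimates, variances):
            """Combine m completed-data estimates by Rubin's rules."""
            estimates = np.asarray(estimates, float)
            variances = np.asarray(variances, float)
            m = len(estimates)
            q_bar = estimates.mean()          # pooled point estimate
            w = variances.mean()              # within-imputation variance
            b = estimates.var(ddof=1)         # between-imputation variance
            t = w + (1.0 + 1.0 / m) * b       # Rubin's total variance
            return q_bar, t

        q, t = rubin_combine([1.02, 0.97, 1.05, 1.00, 0.99],
                             [0.040, 0.050, 0.040, 0.050, 0.040])
        print(f"estimate = {q:.3f}, total variance = {t:.4f}")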

  13. Facing "the Curse of Dimensionality": Image Fusion and Nonlinear Dimensionality Reduction for Advanced Data Mining and Visualization of Astronomical Images

    NASA Astrophysics Data System (ADS)

    Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.

    2009-05-01

    The complexity of multitemporal/multispectral astronomical data sets, together with the approaching petascale of such datasets and large astronomical surveys, requires automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increasing dimensionality of the data. Image fusion (combining information from multiple sensors to create a composite enhanced image) and dimension reduction (finding lower-dimensional representations of high-dimensional data) are effective approaches to "the curse of dimensionality", thus facilitating automated feature selection, classification, and data segmentation. Dimension reduction methods greatly increase the computational efficiency of machine learning algorithms, improve statistical inference, and, together with image fusion, enable effective scientific visualization (as opposed to merely illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of the essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator using only a small portion of the graph. We use the first two or three eigenfunctions to embed large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis, and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative.
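
    A stripped-down form of Laplacian-eigenfunction embedding, without the variational splines the authors use to avoid eigen-decomposing the full graph, can be sketched as follows; the neighbourhood size and kernel bandwidth are arbitrary choices for the example.

        import numpy as np
        from scipy.linalg import eigh
        from scipy.sparse.csgraph import laplacian

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 50))        # 300 points in 50 dimensions

        # Gaussian affinity graph, thresholded to roughly 10 nearest neighbours.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())
        W[d2 > np.sort(d2, axis=1)[:, [10]]] = 0.0
        np.fill_diagonal(W, 0.0)
        W = np.maximum(W, W.T)                # symmetrize

        L = laplacian(W, normed=True)
        vals, vecs = eigh(L)                  # eigenvalues in ascending order
        embedding = vecs[:, 1:4]              # first non-trivial eigenfunctions
        print(embedding.shape)                # (300, 3) embedded coordinates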

  14. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  15. Practice reduces task relevant variance modulation and forms nominal trajectory.

    PubMed

    Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-07

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  16. APFBC repowering could help meet Kyoto Protocol CO2 reduction goals [Advanced Pressurized Fluidized Bed Combustion]

    SciTech Connect

    Weinstein, R.E.; Tonnemacher, G.C.

    1999-07-01

    The Clinton Administration signed the 1997 Kyoto Protocol agreement that would limit US greenhouse gas emissions, of which carbon dioxide (CO2) is the most significant. While the Kyoto Protocol has not yet been submitted to the Senate for ratification, few proposed environmental actions that received continued and widespread attention from the press and environmental activists have failed, in the past, to lead eventually to regulation. Since the Kyoto Protocol might lead to future regulation, its implications need investigation by the power industry. Limiting CO2 emissions affects the ability of the US to generate reliable, low-cost electricity, and has a tremendous potential impact on electric generating companies with a significant investment in coal-fired generation, and on their customers. This paper explores the implications of reducing coal-plant CO2 by various amounts. The reduction proposed for the US in the Kyoto Protocol is huge: it would commit the US to reduce its CO2 emissions to 7% below 1990 levels. Since 1990, significant growth in US population and the US economy has been driving carbon emissions 34% higher by the year 2010. That means CO2 would have to be reduced by 30.9%, which is extremely difficult to accomplish. The paper tells why. There are, however, coal-based technologies that should be available in time to make significant reductions in coal-plant CO2 emissions. The paper focuses on one plant repowering method that can reduce CO2 per kWh by 25%, advanced circulating pressurized fluidized bed combustion combined cycle (APFBC) technology, based on results from a recent APFBC repowering concept evaluation of the Carolina Power & Light Company's (CP&L) L.V. Sutton steam station. The replacement of the existing 50-year base of power generating units needed to meet proposed Kyoto Protocol CO2 reduction commitments would be a massive undertaking.

  17. Variance propagation by simulation (VPSim)

    SciTech Connect

    Burr, T.L.; Coulter, C.A.; Prommel, J.M.

    1997-07-01

    The application of propagation of variance (POV) for estimating the variance of a material balance is straightforward but tedious. Several computer codes exist today to help perform POV. Examples include MAWST ("materials accounting with sequential testing," used by some Department of Energy sites) and VP ("variance propagation," used for training). Also, some sites have such simple error models that custom spreadsheet-like calculations are adequate. Any software to perform POV will have its strengths and weaknesses. A main disadvantage of MAWST is probably its limited form of error models. This limited form forces the user to use cryptic pseudo-measurements to effectively extend the allowed error models. A common example is to include sampling error in the total random error by dividing the actual measurement into two pseudo-measurements. Because POV can be tedious and input files can be presented to MAWST in multiple ways, it is valuable to have an alternative method for comparing results. This paper describes a new code, VPSim, that uses Monte Carlo simulation to do POV. VPSim does not need to rely on pseudo-measurements. It is written in C++, runs under Windows NT, and has a user-friendly interface. VPSim has been tested on several example problems, and in this paper we compare its results to results from MAWST. We also describe its error models and indicate the structure of its input files. A main disadvantage of VPSim is its long run times. If many simulations are required (20,000 or more, repeated two or more times) and each balance period has many (10,000 or more) measurements, run times can be half an hour or more. For small and modest-sized problems, run times are a few minutes. The main advantage of VPSim is that its input files are simple to construct, and therefore also relatively easy to inspect.

  18. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)^2/(var V). Confidence intervals for mvar can then be constructed from levels of the appropriate χ² distribution.
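
    The reformulation makes the estimator a few lines of code: with S the cumulative sum of the time residuals x (and S_0 = 0), the inner phase average collapses into the third difference S_{j+3m} - 3S_{j+2m} + 3S_{j+m} - S_j. A sketch of the resulting estimator:

        import numpy as np

        def mod_avar(x, m, tau0=1.0):
            """Modified Allan variance at tau = m*tau0 from phase data x,
            computed via third differences of the cumulative sum."""
            x = np.asarray(x, float)
            N = len(x)
            s = np.concatenate(([0.0], np.cumsum(x)))      # S_0 .. S_N
            j = np.arange(N - 3 * m + 1)
            d3 = s[j + 3 * m] - 3 * s[j + 2 * m] + 3 * s[j + m] - s[j]
            return (d3 ** 2).mean() / (2.0 * m ** 4 * tau0 ** 2)

        rng = np.random.default_rng(0)
        x = rng.normal(size=4096)              # white phase noise example
        for m in (1, 2, 4, 8):
            print(m, mod_avar(x, m))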

  19. Mitral disc-valve variance

    PubMed Central

    Berroya, Renato B.; Escano, Fernando B.

    1972-01-01

    This report deals with a rare complication of disc-valve prostheses in the mitral area. Significant destruction of the disc poppet and struts of mitral Beall valve prostheses occurred 20 and 17 months after implantation. The resulting valve incompetence in the first case contributed to the death of the patient. The durability of Teflon prosthetic valves appears to be in question, and this type of valve will probably become unacceptable if disc-valve variance grows more frequent in the future. PMID:5017573

  20. Cosmic Strings and Cosmic Variance

    NASA Astrophysics Data System (ADS)

    Gangui, Alejandro; Perivolaropoulos, Leandros

    1995-07-01

    By using a simple analytical model based on counting random multiple impulses inflicted on photons by a network of cosmic strings, we show how to construct the general q-point temperature correlation function of the cosmic microwave background radiation. Our analysis is especially well suited to large angular scales, where the Kaiser-Stebbins effect is dominant. We then concentrate on the four-point function and, in particular, on its zero-lag limit, namely the excess kurtosis parameter, for which we obtain a predicted value of ~10^-2. In addition, we estimate the cosmic variance for the kurtosis due to a Gaussian fluctuation field, showing its dependence on the primordial spectral index of density fluctuations n and finding agreement with previously published results for the particular case of a flat Harrison-Zel'dovich spectrum. Our value for the kurtosis compares well with previous analyses but falls below the threshold imposed by the cosmic variance when commonly accepted parameters from string simulations are considered. In particular, the non-Gaussian signal is found to be inversely proportional to the scaling number of defects, as could be expected from the central limit theorem.

  1. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients, the wavelet variance, is identical to the Allan variance multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
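
    The factor-of-two relation is easy to verify numerically. The sketch below computes the overlapping Allan variance and a Haar-type wavelet variance from the same fractional-frequency series and checks that avar = 2 * wvar; boundary handling is ignored and the MODWT is emulated by direct filtering at integer scale m.

        import numpy as np

        def allan_var(y, m):
            """Overlapping Allan variance of frequency data y at factor m."""
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
            d = ybar[m:] - ybar[:-m]
            return 0.5 * (d ** 2).mean()

        def haar_wavelet_var(y, m):
            """Haar (MODWT-style) wavelet variance at scale m."""
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")
            w = 0.5 * (ybar[m:] - ybar[:-m])   # Haar coefficient at scale m
            return (w ** 2).mean()

        rng = np.random.default_rng(0)
        y = rng.normal(size=8192)              # white frequency noise
        for m in (1, 2, 4, 8):
            print(m, allan_var(y, m), 2.0 * haar_wavelet_var(y, m))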

  2. The phenome-wide distribution of genetic variance.

    PubMed

    Blows, Mark W; Allen, Scott L; Collet, Julie M; Chenoweth, Stephen F; McGuigan, Katrina

    2015-07-01

    A general observation emerging from estimates of additive genetic variance in sets of functionally or developmentally related traits is that much of the genetic variance is restricted to a few trait combinations as a consequence of genetic covariance among traits. While this biased distribution of genetic variance among functionally related traits is now well documented, how it translates to the broader phenome, and therefore to any trait combination under selection in a given environment, is unknown. We show that 8,750 gene expression traits measured in adult male Drosophila serrata exhibit widespread genetic covariance among random sets of five traits, implying that pleiotropy is common. Ultimately, to understand the phenome-wide distribution of genetic variance, very large additive genetic variance-covariance matrices (G) must be estimated. We draw upon recent advances in matrix theory for completing high-dimensional matrices to estimate the 8,750-trait G and show that large numbers of gene expression traits genetically covary as a consequence of a single genetic factor. Using gene ontology term enrichment analysis, we show that the major axis of genetic variance among expression traits successfully identified genetic covariance among genes involved in multiple modes of transcriptional regulation. Our approach provides a practical empirical framework for the genetic analysis of high-dimensional phenome-wide trait sets and for the investigation of the extent of high-dimensional genetic constraint.

  3. Chapter 10: A Hilbert Space Approach To Variance Reduction

    DTIC Science & Technology

    2005-11-16

    …text are presented in Avellaneda et al. (2001) and in Avellaneda and Gamba (2000). Consider the standard control-variate (CV) setting: (Y_1, X_1), …, (Y_n, X_n) are… …minimization objective; this is the subject of Avellaneda and Gamba (2000) and Avellaneda et al. (2001). The important case of f(w) = w^2 is considered next… Reference: Avellaneda, M., Buff, R., Friedman, C., Grandchamp, N., Kruk, L., Newman, J., 2001. Weighted Monte Carlo: A new technique for calibrating asset-pricing models.

  4. Global variance reduction for Monte Carlo reactor physics calculations

    SciTech Connect

    Zhang, Q.; Abdel-Khalik, H. S.

    2013-07-01

    Over the past few decades, hybrid Monte Carlo-deterministic (MC-DT) techniques have mostly focused on the development of methods primarily with shielding applications in mind, i.e., problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations at the assembly level, where typically one needs to execute the flux solver on the order of 10^3-10^5 times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method to prepare homogenized few-group cross-section sets at the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that the SUBSPACE method achieves a significant speedup over the state-of-the-art FW-CADIS method. While the presented speedup alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)

  5. Variance Reduction for Quantile Estimates in Simulations Via Nonlinear Controls

    DTIC Science & Technology

    1990-04-01

    …linear control depends upon the correlation between the statistic of interest and the control, which is often low. Since statistics often have a nonlinear… …interest and the control reduces the effectiveness of the nonlinear control to that of the linear control. However, the data has to be sectioned to…

  6. Feasibility Study of Variance Reduction in the Logistics Composite Model

    DTIC Science & Technology

    2007-03-01

    …[tabulated simulation output omitted]… …simulation, called μ_Y, for which Ȳ is an estimator. Also, assume there is another output variable, X, that is correlated with the Y response and has an expected value μ_X that is known. Since X is correlated with the Y variable, it is known as the control variable. Now consider the controlled…
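
    Filling in the standard construction the fragment introduces: the controlled estimator is Y - b(X - μ_X), with b = Cov(Y, X)/Var(X) minimizing the variance. A sketch on a toy problem (the variables here are illustrative, not LCOM outputs):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        x = rng.normal(size=n)        # control variable with known mean
        y = np.exp(x)                 # response; true mean is e**0.5
        mu_x = 0.0

        b = np.cov(y, x)[0, 1] / x.var(ddof=1)   # optimal linear coefficient
        y_cv = y - b * (x - mu_x)                # controlled estimator

        print("plain     :", y.mean(), "var of mean:", y.var(ddof=1) / n)
        print("controlled:", y_cv.mean(), "var of mean:", y_cv.var(ddof=1) / n)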

  7. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    SciTech Connect

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  8. Reduction in hospital costs and resource consumption associated with the use of advanced topical hemostats during inpatient procedures.

    PubMed

    Martyn, Derek; Kocharian, Richard; Lim, Sangtaeck; Meckley, Lisa M; Miyasato, Gavin; Prifti, Katerina; Rao, Yajing; Riebman, Jerome B; Scaife, Jillian G; Soneji, Yogesh; Corral, Mitra

    2015-06-01

    The use of hemostatic agents has increased over time for all surgical procedures. The purpose of this study was to evaluate the newer topical absorbable hemostat products Surgicel Fibrillar and Surgicel SNoW (Surgicel advanced products, abbreviated as SAPs) compared to the older product Surgicel Original (SO) with respect to healthcare resource use and costs in procedures where these hemostats are most commonly used. A retrospective analysis of the Premier hospital database was used to identify adults who underwent brain/cerebral procedures (BC), cardiovascular procedures (CV: valve surgery and coronary artery bypass graft), and carotid endarterectomy (CEA) between January 2011 and December 2012. Among these patients, those treated with SAPs were compared to those treated with SO. Propensity score matching (PSM) was used to create comparable groups for evaluating differences between SAPs and SO. The primary endpoints for this study were length of stay (LOS), all-cause total cost, number of intensive care unit (ICU) days, ICU cost, transfusion costs and units, and SO/SAP product units per discharge. PSM created matched patient cohorts for SO and SAPs for BC (n = 758 for both groups), CV (n = 3388 for both groups), and CEA (n = 2041 for both groups) procedures. Patients who received SAPs had a 14-16% lower mean LOS for each procedure compared to SO, as well as a 12-18% lower total mean cost per discharge for each procedure (p < 0.02 for all results). Mean ICU costs for SAPs were also lower, with a reduction of 20% for BC and 19% for CV compared to SO (p < 0.01). For CEA, however, there was no statistically significant difference in ICU costs between SAPs and SO. In this retrospective hospital database analysis, the use of SAPs was associated with lower healthcare resource utilization and costs compared to SO.

  9. Analytic variance estimates of Swank and Fano factors.

    PubMed

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank

    2014-07-01

    Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
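
    The analytic estimators derived by the authors are not reproduced here, but both metrics are simple functions of the raw moments of the detector-output distribution: the Swank factor is I = m1^2/m2 for a normalized distribution, and the Fano factor is F = variance/mean. The sketch below uses batching as a crude stand-in for an on-the-fly variance estimate and hypothetical gamma-distributed pulse heights:

    import numpy as np

    def swank_factor(outputs):
        """Swank factor I = m1^2 / m2 from the raw moments of the
        per-interaction detector output distribution."""
        m1 = np.mean(outputs)
        m2 = np.mean(outputs**2)
        return m1**2 / m2

    def fano_factor(outputs):
        """Fano factor F = variance / mean of the detector output."""
        return np.var(outputs, ddof=1) / np.mean(outputs)

    def batch_cv(outputs, estimator, n_batches=50):
        """Rough coefficient of variation of an estimator via batching,
        usable as a stopping criterion during a Monte Carlo run."""
        vals = np.array([estimator(b) for b in np.array_split(outputs, n_batches)])
        return vals.std(ddof=1) / (abs(vals.mean()) * np.sqrt(n_batches))

    rng = np.random.default_rng(1)
    pulses = rng.gamma(shape=50.0, scale=20.0, size=100_000)  # hypothetical pulses
    print(swank_factor(pulses), fano_factor(pulses), batch_cv(pulses, swank_factor))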

  10. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.
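
    The striplet estimator itself is not reproduced here, but the underlying problem is easy to demonstrate. In the sketch below (an assumed 1-D region with Poisson counts and a strong density trend), the systematic design genuinely has lower variance of the estimated mean, while a variance estimate computed under random-design assumptions over-reports it:

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed 1-D survey region: 1000 units with a strongly trended density.
    n_units, k = 1000, 20
    counts = rng.poisson(np.linspace(0.5, 10.0, n_units))

    def systematic_sample(start):
        return counts[start::n_units // k]      # every 50th unit

    def random_sample():
        return counts[rng.choice(n_units, size=k, replace=False)]

    # Empirical design-based variance of the sample mean under each design.
    sys_means = [systematic_sample(s).mean() for s in range(n_units // k)]
    ran_means = [random_sample().mean() for _ in range(2000)]
    print("systematic design variance:", np.var(sys_means))
    print("random design variance:    ", np.var(ran_means))

    # A random-design formula (s^2 / k) applied to one systematic sample
    # over-reports the smaller systematic variance:
    one = systematic_sample(0)
    print("random-design estimate:    ", one.var(ddof=1) / k)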

  11. Warped functional analysis of variance.

    PubMed

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  12. A multi-variance analysis in the time domain

    NASA Technical Reports Server (NTRS)

    Walter, Todd

    1993-01-01

    Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.
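
    The chi-square fitting itself is not reproduced here, but the time-domain variances being fitted can be illustrated with the non-overlapping Allan variance, AVAR = (1/2) E[(ybar_{k+1} - ybar_k)^2] at averaging factor m. With hypothetical white-FM noise plus a linear frequency drift, the drift term dominates at long averaging times, the regime where deterministic parameters must be estimated jointly with the noise model:

    import numpy as np

    def allan_variance(y, m):
        """Non-overlapping Allan variance of fractional-frequency data y at
        averaging factor m: 0.5 * mean of squared differences of block means."""
        n = y.size // m
        ybar = y[: n * m].reshape(n, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(ybar) ** 2)

    rng = np.random.default_rng(3)
    y = rng.normal(0.0, 1e-11, 100_000)      # hypothetical white-FM noise
    y = y + 1e-13 * np.arange(y.size)        # plus a linear frequency drift
    for m in (1, 10, 100, 1000):
        print(m, allan_variance(y, m))
    # White FM alone gives AVAR ~ 1/m; the drift makes AVAR rise as m^2 at
    # long averaging times, so a fit must model both together.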

  13. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air Pollution...

  14. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air Pollution...

  15. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air Pollution...

  16. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air Pollution...

  17. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air Pollution...

  18. VARAN: A Linear Model Variance Analysis Program.

    ERIC Educational Resources Information Center

    Hall, Charles E.; And Others

    This memorandum is the manual for the VARAN (VARiance ANalysis) program, which is the latest addition to a series of computer programs for multivariate analysis of variance. As with earlier programs, analysis of variance, univariate and multivariate, is the main target of the program. Correlation analysis of all types is available with printout in…

  19. Speed Variance and Its Influence on Accidents.

    ERIC Educational Resources Information Center

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  20. Restricted sample variance reduces generalizability.

    PubMed

    Lakes, Kimberley D

    2013-06-01

    One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context.
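
    As a concrete instance of the D-study forecasts described, the generalizability coefficient for a persons-by-raters design is g = Vp/(Vp + Vpr,e/n_r), so adding raters shrinks the error term. A minimal sketch with hypothetical variance components, where restriction of range is represented as a reduced person variance Vp:

    def g_coefficient(var_person, var_error, n_raters):
        """Generalizability coefficient for a persons x raters design:
        g = Vp / (Vp + Vpr,e / n_raters)."""
        return var_person / (var_person + var_error / n_raters)

    # Hypothetical variance components: restricting the range of the sample
    # lowers the person variance Vp and hence g; more raters partly recover it.
    for vp in (1.0, 0.4):
        for nr in (1, 2, 4, 8):
            print(f"Vp={vp} raters={nr} g={g_coefficient(vp, 1.0, nr):.2f}")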

  1. VARIANCE OF MICROSOMAL PROTEIN AND ...

    EPA Pesticide Factsheets

    Differences in the pharmacokinetics of xenobiotics among humans make them differentially susceptible to risk. Differences in enzyme content can mediate pharmacokinetic differences. Microsomal protein is often isolated from liver to characterize enzyme content and activity, but no measures exist to extrapolate these data to the intact liver. Measures were developed from up to 60 samples of adult human liver to characterize the content of microsomal protein and cytochrome P450 (CYP) enzymes. Statistical evaluations are necessary to estimate values far from the mean value. Adult human liver contains 52.9 ± 1.476 mg microsomal protein per g; 2587 ± 1.84 pmol CYP2E1 per g; and 5237 ± 2.214 pmol CYP3A per g (geometric mean ± geometric standard deviation). These values are useful for identifying and testing susceptibility as a function of enzyme content when used to extrapolate in vitro rates of chemical metabolism for input to physiologically based pharmacokinetic models, which can then be exercised to quantify the effect of variance in enzyme expression on risk-relevant pharmacokinetic outcomes.

  2. [The role of technical & financial cooperation to advance nursing profession in the area of demand reduction in Latin America: challenges and perspectives].

    PubMed

    Wright, Maria da Gloria Miotto; Chisman, Anna McG; Mendes, Isabel Amélia Costa; Luis, Margarita Antonia Villar; Carvalho, Emilia Campos de; Mamede, Marli Villela

    2004-01-01

    A new framework of Technical & Financial Cooperation (TFC) has been used to develop a partnership between an international organization and universities in Latin America to advance the contribution of the nursing profession in the area of demand reduction. The purpose of TFC is to support development on specific issues or areas that need to produce an impact on society as a whole. The "Regional Research Capacity-Building Program", which prepares nurses to study the drug phenomenon in Latin America, is an example of this new TFC framework, enabling nurses to apply science and technology to health promotion, prevention of drug use and abuse, and social integration in Latin America. TFC has become a powerful instrument for advancing the nursing profession in the area of demand reduction.

  3. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  4. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  5. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  6. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  7. Comparison of desired radiographic advancement distance and true advancement distance required for patellar tendon-tibial plateau angle reduction to the ideal 90° in dogs by use of the modified Maquet technique.

    PubMed

    Pillard, Paul; Livet, Veronique; Cabon, Quentin; Bismuth, Camille; Sonet, Juliette; Remy, Denise; Fau, Didier; Carozzo, Claude; Viguier, Eric; Cachon, Thibaut

    2016-12-01

    OBJECTIVE To evaluate the validity of 2 radiographic methods for measurement of the tibial tuberosity advancement distance required to achieve a reduction in patellar tendon-tibial plateau angle (PTA) to the ideal 90° in dogs by use of the modified Maquet technique (MMT). SAMPLE 24 stifle joints harvested from 12 canine cadavers. PROCEDURES Radiographs of stifle joints placed at 135° in the true lateral position were used to measure the required tibial tuberosity advancement distance with the conventional (A(M)) and correction (A(E)) methods. The MMT was used to successively advance the tibial crest to A(M) and A(E). Postoperative PTA was measured on a mediolateral radiograph for each advancement measurement method. If none of the measurements were close to 90°, the advancement distance was modified until the PTA was equal to 90° within 0.1°, and the true advancement distance (TA) was measured. Results were used to determine the optimal commercially available size of cage implant that would be used in a clinical situation. RESULTS Median A(M) and A(E) were 10.6 mm and 11.5 mm, respectively. Mean PTAs for the conventional and correction methods were 93.4° and 92.3°, respectively, and differed significantly from 90°. Median TA was 13.5 mm. The A(M) and A(E) led to the same cage size recommendations as for TA for only 1 and 4 stifle joints, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Both radiographic methods of measuring the distance required to advance the tibial tuberosity in dogs led to an under-reduction in postoperative PTA when the MMT was used. A new, more accurate radiographic method needs to be developed.

  8. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    USDA-ARS?s Scientific Manuscript database

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  9. Genetic Variance in the F2 Generation of Divergently Selected Parents

    Treesearch

    M.P. Koshy; G. Namkoong; J.H. Roberds

    1998-01-01

    Either by selective breeding for population divergence or by using natural population differences, F2 and advanced generation hybrids can be developed with high variances. We relate the size of the genetic variance to the population divergence based on a forward and backward mutation model at a locus with two alleles with additive gene action....

  10. Generalized Analysis of Molecular Variance

    PubMed Central

    Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J

    2007-01-01

    Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a

  11. Generalized analysis of molecular variance.

    PubMed

    Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J

    2007-04-06

    Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a

  12. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of the model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.

  13. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  14. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  15. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  16. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  17. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  18. The phenotypic variance gradient - a novel concept.

    PubMed

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
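
    A minimal sketch of the proposed plot, with hypothetical trait samples across an assumed environmental gradient; the slope-2 reference line (variance proportional to mean squared, i.e., a constant coefficient of variation) is one natural choice and is an assumption here, not the authors' prescription:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)

    # Hypothetical trait measurements for one genotype across five environments.
    samples = [rng.normal(10 + 2 * e, 1.0 + 0.3 * e, size=50) for e in range(5)]

    log_mean = np.log([s.mean() for s in samples])
    log_var = np.log([s.var(ddof=1) for s in samples])

    plt.plot(log_mean, log_var, "o-", label="phenotypic variance gradient")
    # Assumed reference line of slope 2: the scaling expected from a constant
    # coefficient of variation (variance proportional to mean squared).
    x = np.linspace(log_mean.min(), log_mean.max(), 10)
    plt.plot(x, 2 * x + (log_var[0] - 2 * log_mean[0]), "--", label="slope-2 reference")
    plt.xlabel("log(mean)")
    plt.ylabel("log(variance)")
    plt.legend()
    plt.show()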

  19. Geographical differences in risk of advanced breast cancer: Limited evidence for reductions over time, Queensland, Australia 1997-2014.

    PubMed

    Dasgupta, Paramita; Youl, Philippa H; Aitken, Joanne F; Turrell, Gavin; Baade, Peter

    2017-10-03

    Reducing geographical inequalities in breast cancer stage remains a key focus of public health policy. We explored whether patterns of advanced breast cancer by residential accessibility and disadvantage in Queensland, Australia, have changed over time. Population-based cancer registry study of 38,706 women aged at least 30 years diagnosed with a first primary invasive breast cancer of known stage between 1997 and 2014. Multilevel logistic regression was used to examine temporal changes in associations of area-level factors with odds of advanced disease after adjustment for individual-level factors. Overall 19,401 (50%) women had advanced breast cancer. Women from the most disadvantaged areas had higher adjusted odds (OR = 1.23 [95%CI 1.13, 1.32]) of advanced disease than those from least disadvantaged areas, with no evidence this association had changed over time (interaction p = 0.197). Living in less accessible areas independently increased the adjusted odds (OR = 1.18 [1.09, 1.28]) of advanced disease, with some evidence that the geographical inequality had reduced over time (p = 0.045). Sensitivity analyses for un-staged cases showed that the original associations remained, regardless of assumptions made about the true stage distribution. Both geographical and residential socioeconomic inequalities in advanced stage diagnoses persist, potentially reflecting barriers in accessing diagnostic services. Given the role of screening mammography in early detection of breast cancer, the lack of population-based data on private screening limits our ability to determine overall participation rates by residential characteristics. Without such data, the efficacy of strategies to reduce inequalities in breast cancer stage will remain compromised. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Reduction of organic trace compounds and fresh water consumption by recovery of advanced oxidation processes treated industrial wastewater.

    PubMed

    Bierbaum, S; Öller, H-J; Kersten, A; Klemenčič, A Krivograd

    2014-01-01

    Ozone (O3) has been used successfully in advanced wastewater treatment in paper mills, in other sectors, and in municipalities. To address the water problems of regions lacking fresh water, wastewater treated by advanced oxidation processes (AOPs) can substitute for fresh water in highly water-consuming industries. The results of this study have shown that when paper mill wastewater is reused, paper strength properties are not impaired and whiteness is only slightly impaired. Furthermore, organic trace compounds are becoming an issue in the German paper industry. The results of this study have shown that AOPs are capable of improving wastewater quality by reducing organic load, colour and organic trace compounds.

  1. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  2. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  3. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  4. Variance Design and Air Pollution Control

    ERIC Educational Resources Information Center

    Ferrar, Terry A.; Brownstein, Alan B.

    1975-01-01

    Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…

  5. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... which the employer has taken to protect the health and safety of workers and adequately show that such... the health and safety of the workers. The RA shall send the approved variance to the employer and...

  6. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... which the employer has taken to protect the health and safety of workers and adequately show that such... the health and safety of the workers. The RA shall send the approved variance to the employer and...

  7. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... which the employer has taken to protect the health and safety of workers and adequately show that such... the health and safety of the workers. The RA shall send the approved variance to the employer and...

  8. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... which the employer has taken to protect the health and safety of workers and adequately show that such... the health and safety of the workers. The RA shall send the approved variance to the employer and...

  9. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... which the employer has taken to protect the health and safety of workers and adequately show that such... the health and safety of the workers. The RA shall send the approved variance to the employer and...

  10. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and health standard and, in addition to the content required by paragraph (c) of this section, must.... Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written... standard, or portion thereof, from which the contractor seeks a variance; (4) A description of the steps...

  11. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and health standard and, in addition to the content required by paragraph (c) of this section, must.... Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written... standard, or portion thereof, from which the contractor seeks a variance; (4) A description of the steps...

  12. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and health standard and, in addition to the content required by paragraph (c) of this section, must.... Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written... standard, or portion thereof, from which the contractor seeks a variance; (4) A description of the steps...

  13. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  14. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  15. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  16. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  17. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  20. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
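
    A minimal sketch of the mean-variance model under simplifying assumptions (simulated weekly returns rather than the FBMKLCI data, and short sales allowed): minimize w'Sw subject to full investment and a target mean return, solved in closed form through the KKT system:

    import numpy as np

    def min_variance_weights(returns, target):
        """Markowitz mean-variance weights: minimize w' S w subject to
        w' 1 = 1 and w' mu = target, via the closed-form KKT system.
        Short sales are allowed (weights may be negative)."""
        mu = returns.mean(axis=0)
        S = np.cov(returns, rowvar=False)
        n = mu.size
        ones = np.ones(n)
        A = np.block([[2 * S, ones[:, None], mu[:, None]],
                      [ones[None, :], np.zeros((1, 2))],
                      [mu[None, :], np.zeros((1, 2))]])
        b = np.concatenate([np.zeros(n), [1.0, target]])
        return np.linalg.solve(A, b)[:n]

    # Hypothetical weekly returns for three assets (not the FBMKLCI data).
    rng = np.random.default_rng(5)
    weekly = rng.multivariate_normal(
        [0.002, 0.003, 0.004],
        [[4e-4, 1e-4, 5e-5],
         [1e-4, 6e-4, 2e-4],
         [5e-5, 2e-4, 9e-4]], size=104)
    w = min_variance_weights(weekly, target=0.003)
    print("weights:", w, "portfolio variance:", w @ np.cov(weekly, rowvar=False) @ w)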

  1. Ion-exchanged route synthesis of Fe2N-N-doped graphitic nanocarbons composite as advanced oxygen reduction electrocatalyst.

    PubMed

    Wang, Lei; Yin, Jie; Zhao, Lu; Tian, Chungui; Yu, Peng; Wang, Jianqiang; Fu, Honggang

    2013-04-14

    Fe2N nanoparticles and nitrogen-doped graphitic nanosheet composites (Fe2N-NGC) have been synthesized by an ion-exchange route; the composites can serve as an efficient non-precious-metal electrocatalyst with a 4e- reaction pathway for the oxygen reduction reaction (ORR).

  2. 45 CFR 156.440 - Plans eligible for advance payments of the premium tax credit and cost-sharing reductions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... tax credit and cost-sharing reductions. 156.440 Section 156.440 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance...

  3. Assessment of tumor size reduction improves outcome prediction of positron emission tomography/computed tomography after chemotherapy in advanced-stage Hodgkin lymphoma.

    PubMed

    Kobe, Carsten; Kuhnert, Georg; Kahraman, Deniz; Haverkamp, Heinz; Eich, Hans-Theodor; Franke, Mareike; Persigehl, Thorsten; Klutmann, Susanne; Amthauer, Holger; Bockisch, Andreas; Kluge, Regine; Wolf, Hans-Heinrich; Maintz, David; Fuchs, Michael; Borchmann, Peter; Diehl, Volker; Drzezga, Alexander; Engert, Andreas; Dietlein, Markus

    2014-06-10

    Positron emission tomography (PET) after chemotherapy can guide consolidating radiotherapy in advanced-stage Hodgkin lymphoma (HL). This analysis aims to improve outcome prediction by integrating additional criteria derived from computed tomography (CT). The analysis set consisted of 739 patients with residues ≥2.5 cm after chemotherapy from a total of 2,126 patients treated in the HD15 trial (HD15 for advanced stage Hodgkin's disease: Quality assurance protocol for reduction of toxicity and the prognostic relevance of fluorodeoxyglucose-positron-emission tomography [FDG-PET] in the first-line treatment of advanced-stage Hodgkin's disease) performed by the German Hodgkin Study Group. A central panel performed image analysis and interpretation of CT scans before and after chemotherapy as well as PET scans after chemotherapy. Prognosis was evaluated by using progression-free survival (PFS); groups were compared with the log-rank test. Potential prognostic factors were investigated by using receiver operating characteristic analysis and logistic regression. In all, 548 (74%) of 739 patients had PET-negative residues after chemotherapy; these patients did not receive additional radiotherapy and showed a 4-year PFS of 91.5%. The 191 PET-positive patients (26%) receiving additional radiotherapy had a 4-year PFS of 86.1% (P=.022). CT alone did not allow further separation of patients in partial remission by risk of recurrence (P=.9). In the subgroup of the 54 PET-positive patients with a relative reduction of less than 40%, the risk of progression or relapse within the first year was 23.1% compared with 5.3% for patients with a larger reduction (difference, 17.9%; 95% CI, 5.8% to 30%). Patients with HL who have PET-positive residual disease after chemotherapy and poor tumor shrinkage are at high risk of progression or relapse. © 2014 by American Society of Clinical Oncology.

  4. Variance Assistance Document: Land Disposal Restrictions Treatability Variances and Determinations of Equivalent Treatment

    EPA Pesticide Factsheets

    This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.

  5. Functional analysis of variance for association studies.

    PubMed

    Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequence variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  6. Can currently available advanced combustion biomass cook-stoves provide health relevant exposure reductions? Results from initial assessment of select commercial models in India.

    PubMed

    Sambandam, Sankar; Balakrishnan, Kalpana; Ghosh, Santu; Sadasivam, Arulselvan; Madhav, Satish; Ramasamy, Rengaraj; Samanta, Maitreya; Mukhopadhyay, Krishnendu; Rehman, Hafeez; Ramanathan, Veerabhadran

    2015-03-01

    Household air pollution from use of solid fuels is a major contributor to the national burden of disease in India. Currently available models of advanced combustion biomass cook-stoves (ACS) report significantly higher efficiencies and lower emissions in the laboratory when compared to traditional cook-stoves, but relatively little is known about the exposure reductions achieved at the household level under routine conditions of use. We report results from initial field assessments of six commercial ACS models in the states of Tamil Nadu and Uttar Pradesh in India. We monitored 72 households (divided into six arms, each receiving one ACS model) for 24-h kitchen-area concentrations of PM2.5 and CO before and 1-6 months after installation of the new stove, together with detailed information on fixed and time-varying household characteristics. Detailed surveys collected information on user perceptions of acceptability for routine use. While the median percentage reductions in 24-h PM2.5 and CO concentrations ranged from 2 to 71% and from 10 to 66%, respectively, concentrations consistently exceeded WHO air quality guideline values across all models, raising questions about the health relevance of such reductions. Most models were perceived to be sub-optimally designed for routine use, often resulting in inappropriate and inadequate levels of use. Household concentration reductions also risk being compromised by high ambient backgrounds from community-level solid-fuel use and by contributions from surrounding fossil-fuel sources. The results indicate that achieving health-relevant exposure reductions in solid-fuel-using households will require cook-stove technologies that integrate emissions reductions with ease of use and adoption at community scale. Imminent efforts are also needed to accelerate the progress towards cleaner fuels.

  7. Some Investigations on Hardness of Investment Casting Process After Advancements in Shell Moulding for Reduction in Cycle Time

    NASA Astrophysics Data System (ADS)

    Singh, R.; Mahajan, V.

    2014-07-01

    In the present work, surface hardness investigations have been made on acrylonitrile butadiene styrene (ABS) pattern-based investment castings after advancements in shell moulding for the replication of biomedical implants. For the present study, a hip joint made of ABS material was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, the mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings with the addition of nylon fibre (1-2 cm in length, 1.5D). This study outlines the surface hardness mechanism for cast components prepared from the ABS master pattern after the advancement in shell moulding. The results of the study highlight that, during shell production, fibre-modified shells have a much reduced drain time. The results are further supported by cooling-rate and microstructure analyses of the castings.

  8. Effectivity of advanced wastewater treatment: reduction of in vitro endocrine activity and mutagenicity but not of in vivo reproductive toxicity.

    PubMed

    Giebner, Sabrina; Ostermann, Sina; Straskraba, Susanne; Oetken, Matthias; Oehlmann, Jörg; Wagner, Martin

    2016-09-06

    Conventional wastewater treatment plants (WWTPs) have a limited capacity to eliminate micropollutants. One option to improve this is tertiary treatment. Accordingly, the WWTP Eriskirch at the German river Schussen has been upgraded with different combinations of ozonation, sand, and granulated activated carbon filtration. In this study, the removal of endocrine and genotoxic effects in vitro and reproductive toxicity in vivo was assessed in a two-year long-term monitoring programme. All experiments were performed with aqueous and solid-phase extracted water samples. Untreated wastewater affected several endocrine endpoints in reporter gene assays. The conventional treatment removed the estrogenic and androgenic activity by 77 and 95 %, respectively. Nevertheless, high anti-estrogenic activities and reproductive toxicity persisted. All advanced treatment technologies further reduced the estrogenic activities by an additional 69-86 % compared to conventional treatment, resulting in a complete removal of up to 97 %. In the Ames assay, we detected an ozone-induced mutagenicity, which was removed by subsequent filtration. This demonstrates that post-treatment after ozonation is needed to minimize toxic oxidative transformation products. In the reproduction test with the mudsnail Potamopyrgus antipodarum, a decreased number of embryos was observed for all wastewater samples. This indicates that reproductive toxicants were eliminated by neither the conventional nor the advanced treatment. Furthermore, aqueous samples showed higher anti-estrogenic and reproductive toxicity than extracted samples, indicating that the causative compounds are not extractable or were lost during extraction. This underlines the importance of the adequate handling of wastewater samples. Taken together, this study demonstrates that combinations of multiple advanced technologies reduce endocrine effects in vitro. However, they did not remove in vitro anti-estrogenicity and in vivo reproductive toxicity. This

  9. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. Using this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return earned than the mean-variance approach.
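
    A minimal illustration of the stated motivation, using a hypothetical right-skewed return series: under non-normal returns the mean is pulled by the long tail, while the median gives a more robust location estimate, which is what the median-variance substitution exploits:

    import numpy as np

    rng = np.random.default_rng(6)
    # Hypothetical right-skewed weekly returns: lognormal shocks around a small
    # drift. The long upper tail pulls the mean above the median.
    r = 0.002 + 0.05 * (rng.lognormal(mean=0.0, sigma=1.0, size=500) - np.exp(0.5))
    print("mean return:  ", r.mean())
    print("median return:", np.median(r))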

  10. Variance estimation for stratified propensity score estimators.

    PubMed

    Williamson, E J; Morley, R; Lucas, A; Carpenter, J R

    2012-07-10

    Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
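
    The closing recommendation can be sketched as follows. Everything below is a hypothetical stand-in: the data-generating process is simulated, and a rank-based score replaces the logistic-regression propensity model that would be refitted within each bootstrap resample in practice:

    import numpy as np

    def stratified_ps_effect(x, treat, y, n_strata=5):
        """Treatment-effect estimate by stratification on an estimated
        propensity score (here a crude rank-based score on one confounder x)."""
        ps = (np.argsort(np.argsort(x)) + 0.5) / x.size
        edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
        strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
        effects, weights = [], []
        for s in range(n_strata):
            m = strata == s
            if treat[m].min() != treat[m].max():     # both groups present
                effects.append(y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean())
                weights.append(m.sum())
        return np.average(effects, weights=weights)

    def bootstrap_se(x, treat, y, n_boot=500, seed=0):
        """Bootstrap standard error that re-estimates the propensity score in
        every resample, so the score's estimation uncertainty is included."""
        rng = np.random.default_rng(seed)
        reps = []
        for _ in range(n_boot):
            i = rng.integers(0, y.size, y.size)
            reps.append(stratified_ps_effect(x[i], treat[i], y[i]))
        return np.std(reps, ddof=1)

    rng = np.random.default_rng(7)
    n = 2000
    x = rng.normal(size=n)                                   # confounder
    treat = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)
    y = 1.0 * treat + 0.5 * x + rng.normal(size=n)           # true effect = 1
    print(stratified_ps_effect(x, treat, y), bootstrap_se(x, treat, y))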

  11. Neural field theory with variance dynamics.

    PubMed

    Robinson, P A

    2013-06-01

    Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus into determination of the fixed points and the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.

  12. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  13. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  14. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  15. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  16. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  17. 13 CFR 307.22 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law. ...

  18. Reducing variance in batch partitioning measurements

    SciTech Connect

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
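
    The design rule described above follows from the batch relation Kd = ((C0 - C)/C)(V/m): the sorbed fraction is Kd/(Kd + V/m), which equals one half when the solution:sorbent ratio V/m matches Kd. A minimal sketch with hypothetical numbers:

    import numpy as np

    def kd_from_batch(c0, c, vol_ml, mass_g):
        """Batch partition coefficient Kd = ((C0 - C) / C) * (V / m), in mL/g."""
        return (c0 - c) / c * (vol_ml / mass_g)

    def fraction_sorbed(kd, vol_ml, mass_g):
        """Fraction of sorbate on the solid at equilibrium."""
        return kd / (kd + vol_ml / mass_g)

    # Error-savvy design: set V/m near the expected Kd so about half of the
    # sorbate partitions to the sorbent, minimizing sensitivity to error in C.
    kd_expected = 25.0                 # mL/g, from a scoping run (hypothetical)
    vol_ml = 25.0
    mass_g = vol_ml / kd_expected
    print("V/m:", vol_ml / mass_g, "mL/g")
    print("fraction sorbed:", fraction_sorbed(kd_expected, vol_ml, mass_g))

    # Monte Carlo check: spread of Kd under 2% measurement noise in C.
    rng = np.random.default_rng(8)
    c_meas = 0.5 * (1 + 0.02 * rng.normal(size=10_000))   # half left in solution
    kds = kd_from_batch(1.0, c_meas, vol_ml, mass_g)
    print("CV of Kd:", kds.std() / kds.mean())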

  19. Variance components in discrete force production tasks.

    PubMed

    Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L

    2010-09-01

    The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
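
    The 'good'/'bad' partition can be sketched directly: with total force F equal to the sum of finger forces, the 'bad' component is the variance of the projection of mean-centered forces onto the uniform direction that changes F, and the 'good' component is the remainder. The sketch works in raw finger-force space rather than the authors' finger-mode space, and the synergy-index normalization is one common convention, not necessarily the paper's:

    import numpy as np

    def good_bad_variance(forces):
        """Partition trial-to-trial variance of finger forces (trials x fingers)
        into 'bad' variance, along the uniform direction that changes the total
        force, and 'good' variance in the orthogonal, force-stabilizing subspace."""
        n_fingers = forces.shape[1]
        e = np.ones(n_fingers) / np.sqrt(n_fingers)
        centered = forces - forces.mean(axis=0)
        bad = (centered @ e).var(ddof=1)
        total = centered.var(axis=0, ddof=1).sum()
        return total - bad, bad

    rng = np.random.default_rng(9)
    # Hypothetical synergy: finger forces co-vary negatively, stabilizing the sum.
    base = rng.normal(0.0, 1.0, (200, 4))
    forces = 5.0 + (base - base.mean(axis=1, keepdims=True)) + rng.normal(0.0, 0.1, (200, 4))
    good, bad = good_bad_variance(forces)
    dv = (good / 3 - bad / 1) / ((good + bad) / 4)   # one common normalization
    print("good:", good, "bad:", bad, "synergy index:", dv)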

  20. Variance Components in Discrete Force Production Tasks

    PubMed Central

    SKM, Varadhan; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2010-01-01

    The study addresses the relationships between task parameters and two components of variance, “good” and “bad”, during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does (“bad”) and does not (“good”) affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the “bad” variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. “Good” variance scaled linearly with force magnitude and did not depend on force rate. “Bad” variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic
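
    The "good"/"bad" split used in these two records can be reproduced on synthetic data with a few lines of linear algebra. The sketch below (hypothetical forces, with the decomposition done directly in force space rather than in finger-mode space as in the study) projects across-trial variability onto the direction that changes total force ("bad") and onto its null space ("good"), then forms a per-degree-of-freedom synergy index.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical 4-finger data built to contain a force-stabilizing synergy:
# large variability that cancels in the total force, small variability that
# does not.
null_basis = np.linalg.svd(np.ones((1, 4)))[2][1:]   # 3 force-neutral directions
e_bad = np.ones(4) / 2.0                             # unit direction changing total force
forces = (np.array([4.0, 5.0, 3.5, 2.5])
          + rng.normal(0, 0.8, (n_trials, 3)) @ null_basis
          + rng.normal(0, 0.2, (n_trials, 1)) * e_bad)

demeaned = forces - forces.mean(axis=0)
v_bad = np.var(demeaned @ e_bad)                     # 1 DOF: affects total force
v_total = demeaned.var(axis=0).sum()
v_good = v_total - v_bad                             # 3 DOFs: total force unchanged

dv = (v_good / 3 - v_bad) / (v_total / 4)            # per-DOF index; > 0 is stabilizing
print(f"V_good={v_good:.2f}  V_bad={v_bad:.2f}  synergy index={dv:.2f}")
```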

  1. Advanced Subsonic Technology (AST) Separate-Flow High-Bypass Ratio Nozzle Noise Reduction Program Test Report

    NASA Technical Reports Server (NTRS)

    Low, John K. C.; Schweiger, Paul S.; Premo, John W.; Barber, Thomas J.; Saiyed, Naseem (Technical Monitor)

    2000-01-01

    NASA's model-scale nozzle noise tests show that it is possible to achieve a 3 EPNdB jet noise reduction with inward-facing chevrons and flipper-tabs installed on the primary nozzle and fan nozzle chevrons. These chevrons and tabs are simple devices and are easy to incorporate into existing short-duct separate-flow nonmixed nozzle exhaust systems. However, these devices are expected to cause some small amount of thrust loss relative to the axisymmetric baseline nozzle system. Thus, it is important to have these devices further tested in a calibrated nozzle performance test facility to quantify their thrust performance. The choice of chevrons or tabs for jet noise suppression would most likely be based on the results of thrust loss performance tests to be conducted by Aero System Engineering (ASE) Inc. It is anticipated that the most promising concepts identified from this program will be validated in full-scale engine tests at both Pratt & Whitney and Allied-Signal, under funding from NASA's Engine Validation of Noise Reduction Concepts (EVNRC) programs. This will bring the technology readiness level to the point where the jet noise suppression concepts could be incorporated with high confidence into either new or existing turbofan engines having short-duct, separate-flow nacelles.

  2. Phonocardiographic diagnosis of aortic ball variance.

    PubMed

    Hylen, J C; Kloster, F E; Herr, R H; Hull, P Q; Ames, A W; Starr, A; Griswold, H E

    1968-07-01

    Fatty infiltration causing changes in the silastic poppet of the Model 1000 series Starr-Edwards aortic valve prostheses (ball variance) has been detected with increasing frequency and can result in sudden death. Phonocardiograms were recorded on 12 patients with ball variance confirmed at operation and on 31 controls. Ten of the 12 patients with ball variance were distinguished from the controls by an aortic opening sound (AO) less than half as intense as the aortic closure sound (AC) at the second right intercostal space (AO/AC ratio less than 0.5). Both AO and AC were decreased in two patients with ball variance, with the loss of the characteristic high frequency and amplitude of these sounds. The only patient having a diminished AO/AC ratio (0.42) without ball variance at reoperation had a clot extending over the aortic valve struts. The phonocardiographic findings have been the most reliable objective evidence of ball variance in patients with Starr-Edwards aortic prostheses of the Model 1000 series.

  3. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences between the variational Bayesian and Gibbs sampling estimates of the variance components were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. © 2016 Japanese Society of Animal Science.

  4. Postsynaptic degeneration as revealed by PSD-95 reduction occurs after advanced Aβ and tau pathology in transgenic mouse models of Alzheimer's disease.

    PubMed

    Shao, Charles Y; Mirra, Suzanne S; Sait, Hameetha B R; Sacktor, Todd C; Sigurdsson, Einar M

    2011-09-01

    Impairment of synaptic plasticity underlies memory dysfunction in Alzheimer's disease (AD). Molecules involved in this plasticity such as PSD-95, a major postsynaptic scaffold protein at excitatory synapses, may play an important role in AD pathogenesis. We examined the distribution of PSD-95 in transgenic mice of amyloidopathy (5XFAD) and tauopathy (JNPL3) as well as in AD brains using double-labeling immunofluorescence and confocal microscopy. In wild type control mice, PSD-95 primarily labeled neuropil with distinct distribution in hippocampal apical dendrites. In 3-month-old 5XFAD mice, PSD-95 distribution was similar to that of wild type mice despite significant Aβ deposition. However, in 6-month-old 5XFAD mice, PSD-95 immunoreactivity in apical dendrites markedly decreased and prominent immunoreactivity was noted in neuronal soma in CA1 neurons. Similarly, PSD-95 immunoreactivity disappeared from apical dendrites and accumulated in neuronal soma in 14-month-old, but not in 3-month-old, JNPL3 mice. In AD brains, PSD-95 accumulated in Hirano bodies in hippocampal neurons. Our findings support the notion that either Aβ or tau can induce reduction of PSD-95 in excitatory synapses in hippocampus. Furthermore, this PSD-95 reduction is not an early event but occurs as the pathologies advance. Thus, the time-dependent PSD-95 reduction from synapses and accumulation in neuronal soma in transgenic mice and Hirano bodies in AD may mark postsynaptic degeneration that underlies long-term functional deficits.

  5. Giardia duodenalis: Number and Fluorescence Reduction Caused by the Advanced Oxidation Process (H2O2/UV).

    PubMed

    Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; Dos Santos, Luciana Urbano

    2014-01-01

    This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJ cm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone.

  6. Giardia duodenalis: Number and Fluorescence Reduction Caused by the Advanced Oxidation Process (H2O2/UV)

    PubMed Central

    Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; dos Santos, Luciana Urbano

    2014-01-01

    This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJcm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone. PMID:27379301

  7. Simulated flight acoustic investigation of treated ejector effectiveness on advanced mechanical suppressors for high velocity jet noise reduction

    NASA Technical Reports Server (NTRS)

    Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.

    1986-01-01

    Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.

  8. Encoding of natural sounds by variance of the cortical local field potential

    PubMed Central

    Simon, Jonathan Z.; Shamma, Shihab A.; David, Stephen V.

    2016-01-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594

  9. Summary Report of Advanced Hydropower Innovations and Cost Reduction Workshop at Arlington, VA, November 5 & 6, 2015

    SciTech Connect

    O'Connor, Patrick; Rugani, Kelsey; West, Anna

    2016-03-01

    On behalf of the U.S. Department of Energy (DOE) Wind and Water Power Technology Office (WWPTO), Oak Ridge National Laboratory (ORNL) hosted a day-and-a-half-long workshop on November 5 and 6, 2015 in the Washington, D.C. metro area to discuss cost reduction opportunities in the development of hydropower projects. The workshop had a further targeted focus on the costs of small, low-head facilities at both non-powered dams (NPDs) and along undeveloped stream reaches (also known as New Stream-Reach Development or “NSD”). Workshop participants included a cross-section of seasoned experts, including project owners and developers, engineering and construction experts, conventional and next-generation equipment manufacturers, and others, to identify the most promising ways to reduce costs and achieve improvements for hydropower projects.

  10. A concise guide to sustainable PEMFCs: recent advances in improving both oxygen reduction catalysts and proton exchange membranes.

    PubMed

    Scofield, Megan E; Liu, Haiqing; Wong, Stanislaus S

    2015-08-21

    The rising interest in fuel cell vehicle technology (FCV) has engendered a growing need and realization to develop rational chemical strategies to create highly efficient, durable, and cost-effective fuel cells. Specifically, technical limitations associated with the major constituent components of the basic proton exchange membrane fuel cell (PEMFC), namely the cathode catalyst and the proton exchange membrane (PEM), have proven to be particularly demanding to overcome. Therefore, research trends within the community in recent years have focused on (i) accelerating the sluggish kinetics of the catalyst at the cathode and (ii) minimizing overall Pt content, while simultaneously (a) maximizing activity and durability as well as (b) increasing membrane proton conductivity without a concomitant loss in stability or damage due to flooding. In this light, as an example, high temperature PEMFCs offer a promising avenue to improve the overall efficiency and marketability of fuel cell technology. In this Critical Review, recent advances in optimizing both cathode materials and PEMs as well as the future and peculiar challenges associated with each of these systems will be discussed.

  11. Advanced-warning system risk-reduction experiments: the Multispectral Measurements Program (MSMP) and the Balloon Altitude Mosaic Measurements (BAMM)

    NASA Astrophysics Data System (ADS)

    Hasegawa, Ken R.

    2000-12-01

    MSMP and BAMM were commissioned by the Air Force Space Division (AFSD) in the late seventies to generate data in support of the Advanced Warning System (AWS), a development activity to replace the space-based surveillance satellites of the Defense Support Program (DSP). These programs were carried out by the Air Force Geophysics Laboratory with planning and mentoring by Irving Spiro of The Aerospace Corporation, acting on behalf of the program managers, 1st Lt. Todd Frantz, 1st Lt. Gordon Frantom, and 1st Lt. Ken Hasegawa of the technology program office at AFSD. The motivation for MSMP was the need to characterize the exhaust plumes of the thrusters aboard post-boost vehicles, a primary target for the infrared sensors of the proposed AWS system. To that end, the experiments consisted of a series of Aries rocket launches from White Sands Missile Range in which dual payloads were carried aloft and separately deployed at altitudes above 100 km. One module contained an ensemble of sensors spanning the spectrum from the vacuum ultraviolet to the long wave infrared, all slaved to an rf tracker locked onto a beacon on the target module. The target was a small pressure-fed liquid-propellant rocket engine, a modified Atlas vernier, programmed for a series of maneuvers in the vicinity of the instrument module. As part of this program, diagnostic measurements of the target engine exhaust were made at Rocketdyne, and shock tube experiments on excitation processes were carried out by staff members of Calspan.

  12. Amplitude Reduction and Phase Shifts of Melatonin, Cortisol and Other Circadian Rhythms after a Gradual Advance of Sleep and Light Exposure in Humans

    PubMed Central

    Dijk, Derk-Jan; Duffy, Jeanne F.; Silva, Edward J.; Shanahan, Theresa L.; Boivin, Diane B.; Czeisler, Charles A.

    2012-01-01

    Background The phase and amplitude of rhythms in physiology and behavior are generated by circadian oscillators and entrained to the 24-h day by exposure to the light-dark cycle and feedback from the sleep-wake cycle. The extent to which the phase and amplitude of multiple rhythms are similarly affected during altered timing of light exposure and the sleep-wake cycle has not been fully characterized. Methodology/Principal Findings We assessed the phase and amplitude of the rhythms of melatonin, core body temperature, cortisol, alertness, performance and sleep after a perturbation of entrainment by a gradual advance of the sleep-wake schedule (10 h in 5 days) and associated light-dark cycle in 14 healthy men. The light-dark cycle consisted either of moderate intensity ‘room’ light (∼90–150 lux) or moderate light supplemented with bright light (∼10,000 lux) for 5 to 8 hours following sleep. After the advance of the sleep-wake schedule in moderate light, no significant advance of the melatonin rhythm was observed whereas, after bright light supplementation the phase advance was 8.1 h (SEM 0.7 h). Individual differences in phase shifts correlated across variables. The amplitude of the melatonin rhythm assessed under constant conditions was reduced after moderate light by 54% (17–94%) and after bright light by 52% (range 12–84%), as compared to the amplitude at baseline in the presence of a sleep-wake cycle. Individual differences in amplitude reduction of the melatonin rhythm correlated with the amplitude of body temperature, cortisol and alertness. Conclusions/Significance Alterations in the timing of the sleep-wake cycle and associated bright or moderate light exposure can lead to changes in phase and reduction of circadian amplitude which are consistent across multiple variables but differ between individuals. These data have implications for our understanding of circadian organization and the negative health outcomes associated with shift-work, jet

  13. Effect of advanced aftertreatment for PM and NOx reduction on heavy-duty diesel engine ultrafine particle emissions.

    PubMed

    Herner, Jorn Dinh; Hu, Shaohua; Robertson, William H; Huai, Tao; Chang, M-C Oliver; Rieger, Paul; Ayala, Alberto

    2011-03-15

    Four heavy-duty and medium-duty diesel vehicles were tested in six different aftertreatment configurations using a chassis dynamometer to characterize the occurrence of nucleation (the conversion of exhaust gases to particles upon dilution). The aftertreatment included four different diesel particulate filters (DPFs) and two selective catalytic reduction (SCR) devices. All DPFs reduced the emissions of solid particles by several orders of magnitude, but in certain cases the occurrence of a volatile nucleation mode could increase total particle number emissions. The occurrence of a nucleation mode could be predicted based on the level of catalyst in the aftertreatment, the prevailing temperature in the aftertreatment, and the age of the aftertreatment. The particles measured during nucleation had a high fraction of sulfate, up to 62% of reconstructed mass. Additionally, the catalyst reduced the toxicity measured in chemical and cellular assays, suggesting a pathway for an inverse correlation between particle number and toxicity. The results have implications for exposure to and toxicity of diesel PM.

  14. Modality-Driven Classification and Visualization of Ensemble Variance

    SciTech Connect

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
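
    As a toy version of modality-driven classification, the sketch below (a crude stand-in for the paper's scheme, not its actual classifiers or confidence metrics) counts local maxima of a kernel density estimate at a single location to label the ensemble distribution there as unimodal or multimodal.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def count_modes(samples, grid_size=256):
    """Crude modality estimate: count local maxima of a Gaussian KDE."""
    kde = gaussian_kde(samples)
    xs = np.linspace(samples.min(), samples.max(), grid_size)
    ys = kde(xs)
    peaks = (ys[1:-1] > ys[:-2]) & (ys[1:-1] > ys[2:])
    return int(peaks.sum())

unimodal = rng.normal(0.0, 1.0, 400)                 # single tendency
bimodal = np.concatenate([rng.normal(-2, 0.5, 200),  # divergent trends
                          rng.normal(+2, 0.5, 200)])
print("unimodal ensemble ->", count_modes(unimodal), "mode(s)")
print("bimodal ensemble  ->", count_modes(bimodal), "mode(s)")
```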

  15. Variance Decomposition Using an IRT Measurement Model

    PubMed Central

    Glas, Cees A. W.; Boomsma, Dorret I.

    2007-01-01

    Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%. PMID:17534709

  16. Discrimination of frequency variance for tonal sequences.

    PubMed

    Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A

    2014-12-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
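
    The ideal observer used as a benchmark here can be simulated directly. The sketch below (assumed rove range and trial counts) draws two intervals of five log-frequencies, has the observer pick the interval with the larger sample variance, and shows that performance depends only on the variance ratio, the Weber's-law behavior described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def io_percent_correct(var_stan, var_sig, n_tones=5, n_trials=20_000):
    """2IFC ideal observer: each interval is n_tones log-frequencies from a
    Gaussian with a roved mean; pick the interval with larger sample variance."""
    mu_sig = rng.uniform(2.5, 3.5, n_trials)[:, None]   # roved means (log10 Hz, assumed)
    mu_stan = rng.uniform(2.5, 3.5, n_trials)[:, None]
    sig = rng.normal(mu_sig, np.sqrt(var_sig), (n_trials, n_tones))
    stan = rng.normal(mu_stan, np.sqrt(var_stan), (n_trials, n_tones))
    return (sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)).mean()

# A fixed variance *ratio* yields the same performance at any overall scale:
for v in (0.01, 0.04, 0.16):
    print(f"var_STAN={v}: P(correct)={io_percent_correct(v, 2 * v):.3f}")
```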

  17. Variance estimation for nucleotide substitution models.

    PubMed

    Chen, Weishan; Wang, Hsiuying

    2015-09-01

    The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first-order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study. The simulations show that the variance estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
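
    The flavor of these delta-method derivations can be shown on the simpler Jukes-Cantor (JC69) model, which is not one of the four models treated above but has the same structure: a distance d = -(3/4) ln(1 - 4p/3) estimated from the mismatch proportion p over n sites, with first-order variance Var(d) ≈ (dd/dp)² p(1-p)/n. A minimal sketch, checked against simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

def jc69_distance_and_var(p_hat, n_sites):
    """JC69 distance with its first-order (delta-method) variance; the paper
    treats F81/F84/HKY85/TN93 and higher-order expansions."""
    d = -0.75 * np.log1p(-4.0 * p_hat / 3.0)
    deriv = 1.0 / (1.0 - 4.0 * p_hat / 3.0)          # dd/dp
    return d, deriv**2 * p_hat * (1.0 - p_hat) / n_sites

p, n = 0.2, 500
d, var_d = jc69_distance_and_var(p, n)
p_sim = rng.binomial(n, p, 200_000) / n              # simulated mismatch proportions
d_sim = -0.75 * np.log1p(-4.0 * p_sim / 3.0)
print(f"delta-method SD={np.sqrt(var_d):.4f}  simulated SD={d_sim.std():.4f}")
```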

  18. 180 MW demonstration of advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers

    SciTech Connect

    Not Available

    1991-01-01

    ABB CE's Low NOx Bulk Furnace Staging (LNBFS) System and Low NOx Concentric Firing System (LNCFS) are demonstrated in stepwise fashion. These systems incorporate the concept of advanced overfire air (AOFA), clustered coal nozzles, and offset air. A complete description of the installed technologies is provided in the following section. The primary objective of the Plant Lansing Smith demonstration is to determine the long-term effects of commercially available tangentially-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology are also being performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project.

  19. Reduced Variance for Material Sources in Implicit Monte Carlo

    SciTech Connect

    Urbatsch, Todd J.

    2012-06-25

    Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
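
    The logic of the proposed fix is compact enough to sketch. The following is a schematic of the cutoff test described above, not the Jayenne implementation; the Cell fields, units, and the way the Fleck factor splits the source are simplified stand-ins.

```python
from dataclasses import dataclass

T_CUTOFF = 0.05   # keV; hypothetical user-specified cutoff

@dataclass
class Cell:
    temperature: float      # keV
    material_energy: float
    fleck_factor: float     # IMC effective-scattering fraction in [0, 1]

def deposit_material_source(cell: Cell, e_src: float, census: list) -> None:
    """Below the cutoff, apply the material source deterministically instead
    of sampling it as IMC source particles (no particles, no sampling variance)."""
    if cell.temperature < T_CUTOFF:
        cell.material_energy += e_src
    else:
        e_rad = e_src * cell.fleck_factor          # portion split into the radiation field
        census.append(("source-particle", e_rad))  # stand-in for real particle sampling
        cell.material_energy += e_src - e_rad

census: list = []
deposit_material_source(Cell(0.01, 10.0, 0.3), e_src=0.5, census=census)
print(census)   # empty: the small source was handled deterministically
```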

  20. NiCo2O4/N-doped graphene as an advanced electrocatalyst for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Li, Huiyong; Wang, Haiyan; He, Kejian; Wang, Shuangyin; Tang, Yougen; Chen, Jiajie

    2015-04-01

    Developing low-cost catalysts for the high-performance oxygen reduction reaction (ORR) is highly desirable. Herein, a NiCo2O4/N-doped reduced graphene oxide (NiCo2O4/N-rGO) hybrid is proposed as a high-performance catalyst for the ORR for the first time. The well-formed NiCo2O4/N-rGO hybrid is studied by cyclic voltammetry (CV) and linear-sweep voltammetry (LSV) performed on a rotating ring-disk electrode (RDE), in comparison with N-rGO-free NiCo2O4 and bare N-rGO. Due to a synergistic effect, the NiCo2O4/N-rGO hybrid exhibits significantly improved catalytic performance, with an onset potential of -0.12 V, and mainly favors a direct four-electron pathway in the ORR process, close to the behavior of commercial carbon-supported Pt. The benefits of N-incorporation are also investigated by comparing NiCo2O4/N-rGO with NiCo2O4/rGO: higher cathodic currents, a much more positive half-wave potential, and a higher electron transfer number are observed for the N-doped hybrid, which should be ascribed to the new highly efficient active sites created by N incorporation into graphene. The NiCo2O4/N-rGO hybrid could be used as a promising catalyst for high-power metal/air batteries.

  1. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
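
    A direct segment-averaged estimator of the kind described can be written in a few lines. The sketch below (no tapering and no symmetry-region bookkeeping; test frequencies chosen to fall on exact FFT bins) estimates B(f1, f2) ~ E[X(f1) Y(f2) Z*(f1+f2)] and recovers the quadratic coupling planted in a test signal.

```python
import numpy as np

rng = np.random.default_rng(4)
NFFT = 128

def cross_bispectrum(x, y, z, nfft=NFFT):
    """Direct cross-bispectrum estimate, averaged over non-overlapping segments."""
    n_seg = len(x) // nfft
    f = np.arange(nfft // 2)
    acc = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(n_seg):
        s = slice(k * nfft, (k + 1) * nfft)
        X, Y, Z = (np.fft.fft(v[s] - v[s].mean()) for v in (x, y, z))
        acc += X[f][:, None] * Y[f][None, :] * np.conj(Z[f[:, None] + f[None, :]])
    return acc / n_seg

# Quadratically coupled test signal: z contains the product x*y.
t = np.arange(16_384)
f1, f2 = 12 / NFFT, 20 / NFFT                        # exact FFT bins
x = np.cos(2 * np.pi * f1 * t + rng.uniform(0, 2 * np.pi))
y = np.cos(2 * np.pi * f2 * t + rng.uniform(0, 2 * np.pi))
z = x * y + 0.1 * rng.standard_normal(t.size)
B = cross_bispectrum(x, y, z)
i, j = np.unravel_index(np.abs(B).argmax(), B.shape)
print(f"bispectral peak at (f1, f2) = ({i / NFFT:.3f}, {j / NFFT:.3f})")  # ~ (0.094, 0.156)
```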

  2. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers

    SciTech Connect

    Not Available

    1992-01-01

    The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NOx emission levels to be approximately 0.65 lb/MBtu. Flyash LOI values for the LNB configuration are approximately 8 percent at full load. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. Abbreviated diagnostic tests for the LNB+AOFA configuration indicate that at 500 MWe, NOx emissions are approximately 0.55 lb/MBtu with corresponding flyash LOI values of approximately 11 percent. For comparison, the long-term full-load, baseline NOx emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB+AOFA configuration will be performed when the stack particulate emissions issue is resolved. Testing of a process optimization package on Plant Hammond Unit 4 was performed during this quarter. The software was configured to minimize NOx emissions using total combustion air flow and advanced overfire air distribution as the controlled parameters. Preliminary results from this testing indicate that this package shows promise in reducing NOx emissions while maintaining or improving other boiler performance parameters.

  3. Advanced metal artifact reduction MRI of metal-on-metal hip resurfacing arthroplasty implants: compressed sensing acceleration enables the time-neutral use of SEMAC.

    PubMed

    Fritz, Jan; Fritz, Benjamin; Thawait, Gaurav K; Raithel, Esther; Gilson, Wesley D; Nittka, Mathias; Mont, Michael A

    2016-10-01

    Compressed sensing (CS) acceleration has been theorized for slice encoding for metal artifact correction (SEMAC), but has not been shown to be feasible. Therefore, we tested the hypothesis that CS-SEMAC is feasible for MRI of metal-on-metal hip resurfacing implants. Following prospective institutional review board approval, 22 subjects with metal-on-metal hip resurfacing implants underwent 1.5 T MRI. We compared a prototype CS-SEMAC sequence, a high-bandwidth (high-BW) TSE sequence, and a SEMAC sequence with acquisition times of 4-5, 4-5 and 10-12 min, respectively. Outcome measures included bone-implant interfaces, image quality, periprosthetic structures, artifact size, and signal- and contrast-to-noise ratios (SNR and CNR). Using Friedman, repeated measures analysis of variance, and Cohen's weighted kappa tests, Bonferroni-corrected p-values of 0.005 and less were considered statistically significant. There was no statistical difference in outcome measures between SEMAC and CS-SEMAC images. Visibility of implant-bone interfaces and pseudocapsule as well as fat suppression and metal reduction were "adequate" to "good" on CS-SEMAC and "non-diagnostic" to "adequate" on high-BW TSE (p < 0.001, respectively). SEMAC and CS-SEMAC showed mild blur and ripple artifacts. The metal artifact size was 63 % larger for high-BW TSE as compared to SEMAC and CS-SEMAC (p < 0.0001, respectively). CNRs were sufficiently high and statistically similar, with the exception of CNR of fluid and muscle and CNR of fluid and tendon, which were higher on intermediate-weighted high-BW TSE (p < 0.005, respectively). Compressed sensing acceleration enables the time-neutral use of SEMAC for MRI of metal-on-metal hip resurfacing implants when compared to high-BW TSE, with image quality similar to conventional SEMAC.

  4. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  5. REDUCTION IN HEPATIC INFLAMMATION IS ASSOCIATED WITH LESS FIBROSIS PROGRESSION AND FEWER CLINICAL OUTCOMES IN ADVANCED HEPATITIS C

    PubMed Central

    Morishima, Chihiro; Shiffman, Mitchell L.; Dienstag, Jules L.; Lindsay, Karen L; Szabo, Gyongyi; Everson, Gregory T.; Lok, Anna S.; Di Bisceglie, Adrian M.; Ghany, Marc G.; Naishadham, Deepa; Morgan, Timothy R.; Wright, Elizabeth C.

    2013-01-01

    Objective During the Hepatitis C Antiviral Long-term Treatment against Cirrhosis Trial, 3.5 years of maintenance peginterferon-alfa-2a therapy did not affect liver fibrosis progression or clinical outcomes among 1,050 prior interferon nonresponders with advanced fibrosis or cirrhosis. We investigated whether reduced hepatic inflammation was associated with clinical benefit in 834 patients with a baseline and follow-up biopsy 1.5 years after randomization to peginterferon or observation. Methods Relationships between change in hepatic inflammation (Ishak HAI) and serum ALT, fibrosis progression and clinical outcomes after randomization, and HCV RNA decline before and after randomization were evaluated. Histologic change was defined as a ≥2-point difference in HAI or Ishak fibrosis score between biopsies. Results Among 657 patients who received full-dose peginterferon/ribavirin “lead-in” therapy before randomization, year-1.5 HAI improvement was associated with lead-in HCV RNA suppression in both randomized treated (P <0.0001) and control (P = 0.0001) groups, even in the presence of recurrent viremia. This relationship persisted at year 3.5 in both treated (P = 0.001) and control (P = 0.01) groups. Among 834 patients followed for a median of 6 years, fewer clinical outcomes occurred in patients with improved HAI at year 1.5 compared to those without such improvement in both treated (P = 0.03) and control (P = 0.05) groups. Among patients with Ishak 3–4 fibrosis at baseline, those with improved HAI at year 1.5 had less fibrosis progression at year 1.5 in both treated (P = 0.0003) and control (P = 0.02) groups. Conclusion Reduced hepatic inflammation (measured 1.5 and 3.5 years after randomization) was associated with profound virological suppression during lead-in treatment with full-dose peginterferon/ribavirin and with decreased fibrosis progression and clinical outcomes, independent of randomized treatment. PMID:22688849

  6. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  7. Advances in the reduction and compensation of film stress in high-reflectance multilayer coatings for extreme ultraviolet lithography applications

    SciTech Connect

    Mirkarimi, P.B., LLNL

    1998-02-20

    Due to the stringent surface figure requirements for the multilayer-coated optics in an extreme ultraviolet (EUV) projection lithography system, it is desirable to minimize deformation due to the multilayer film stress. However, the stress must be reduced or compensated without reducing EUV reflectivity, since the reflectivity has a strong impact on the throughput of a EUV lithography tool. In this work we identify and evaluate several leading techniques for stress reduction and compensation as applied to Mo/Si and Mo/Be multilayer films. The measured film stress for Mo/Si films with EUV reflectances near 67.4% at 13.4 nm is approximately -420 MPa (compressive), while it is approximately +330 MPa (tensile) for Mo/Be films with EUV reflectances near 69.4% at 11.4 nm. Varying the Mo-to-Si ratio can be used to reduce the stress to near-zero levels, but at a large loss in EUV reflectance (>20%). The technique of varying the base pressure (impurity level) yielded a 10% decrease in stress with a 2% decrease in reflectance for our multilayers. Post-deposition annealing was performed, and it was observed that while the cost in reflectance is relatively high (3.5%) to bring the stress to near-zero levels (i.e., reduced by 100%), the stress can be reduced by 75% with only a 1.3% drop in reflectivity at annealing temperatures near 200 °C. A study of annealing during Mo/Si deposition was also performed; however, no practical advantage was observed from heating during deposition. A new non-thermal (athermal) buffer-layer technique was developed to compensate for the effects of stress. Using this technique with amorphous silicon and Mo/Be buffer layers it was possible to obtain Mo/Be and Mo/Si multilayer films with a near-zero net film stress and less than a 1% loss in reflectivity. For example, a Mo/Be film with 68.7% reflectivity at 11.4 nm and a Mo/Si film with 66.5% reflectivity at 13.3 nm were produced with net stress values less than 30 MPa.
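
    The buffer-layer compensation amounts to a force balance: substrate bending scales with the net film force per unit width, the sum of stress-thickness products over the stack, so a buffer of opposite-sign stress can null it. A back-of-envelope sketch with the stresses quoted above (the thicknesses are hypothetical):

```python
# Net film force per unit width is sum(sigma_i * t_i); size the buffer to null it.
sigma_ml = -420e6     # Pa, Mo/Si multilayer stress (compressive), quoted above
t_ml = 280e-9         # m, e.g. ~40 bilayers x 7 nm (assumed)
sigma_buf = +330e6    # Pa, a tensile buffer (Mo/Be-like value quoted above)

t_buf = -sigma_ml * t_ml / sigma_buf
print(f"buffer thickness to null the net film force: {t_buf * 1e9:.0f} nm")
```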

  8. Advances in biotreatment of acid mine drainage and biorecovery of metals: 2. Membrane bioreactor system for sulfate reduction.

    PubMed

    Tabak, Henry H; Govind, Rakesh

    2003-12-01

    Several biotreatment techniques for sulfate conversion by sulfate-reducing bacteria (SRB) have been proposed in the past; however, few of them have been practically applied to treat sulfate-containing acid mine drainage (AMD). This research deals with the development of an innovative polypropylene hollow-fiber membrane bioreactor system for the treatment of acid mine water from the Berkeley Pit, Butte, MT, using hydrogen-consuming SRB biofilms. The advantages of the membrane bioreactor over conventional tall liquid-phase sparged-gas bioreactor systems are: a large microporous membrane surface exposed to the liquid phase; formation of hydrogen sulfide outside the membrane, preventing mixing with the pressurized hydrogen gas inside the membrane; no requirement for a gas recycle compressor; a membrane surface suitable for immobilization of active SRB, resulting in the formation of biofilms and thus preventing the washout problems associated with suspended-culture reactors; and lower operating costs, eliminating gas recompression and gas recycle costs. Information is provided on sulfate reduction rate studies and on biokinetic tests with suspended SRB in anaerobic digester sludge and sediment master culture reactors and with SRB biofilms in bench-scale SRB membrane bioreactors. Biokinetic parameters have been determined using biokinetic models for the master culture and membrane bioreactor systems. Data are presented on the effect of acid mine water sulfate loading at 25, 50, 75 and 100 ml/min in scale-up SRB membrane units, under varied temperatures (25, 35 and 40 degrees C), to determine and optimize sulfate conversions for an effective AMD biotreatment. Pilot-scale studies have generated data on the effect of flow rates of acid mine water (MGD) and varied inlet sulfate concentrations in the influents on the resultant outlet sulfate concentration in the effluents and on the number of SRB membrane modules needed for the desired sulfate conversion in

  9. Evaluation of climate modeling factors impacting the variance of streamflow

    NASA Astrophysics Data System (ADS)

    Al Aamery, N.; Fox, J. F.; Snyder, M.

    2016-11-01

    The present contribution quantifies the relative importance of climate modeling factors and chosen response variables upon controlling the variance of streamflow forecasted with global climate model (GCM) projections, which has not been attempted in previous literature to our knowledge. We designed an experiment that varied climate modeling factors, including GCM type, project phase, emission scenario, downscaling method, and bias correction. The streamflow response variable was also varied and included forecasted streamflow and difference in forecast and hindcast streamflow predictions. GCM results and the Soil Water Assessment Tool (SWAT) were used to predict streamflow for a wet, temperate watershed in central Kentucky USA. After calibrating the streamflow model, 112 climate realizations were simulated within the streamflow model and then analyzed on a monthly basis using analysis of variance. Analysis of variance results indicate that the difference in forecast and hindcast streamflow predictions is a function of GCM type, climate model project phase, and downscaling approach. The prediction of forecasted streamflow is a function of GCM type, project phase, downscaling method, emission scenario, and bias correction method. The results indicate the relative importance of the five climate modeling factors when designing streamflow prediction ensembles and quantify the reduction in uncertainty associated with coupling the climate results with the hydrologic model when subtracting the hindcast simulations. Thereafter, analysis of streamflow prediction ensembles with different numbers of realizations show that use of all available realizations is unneeded for the study system, so long as the ensemble design is well balanced. After accounting for the factors controlling streamflow variance, results show that predicted average monthly change in streamflow tends to follow precipitation changes and result in a net increase in the average annual precipitation and
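
    The factorial analysis described here is straightforward to set up once each realization is tagged with its modeling factors. A minimal sketch with synthetic data (factor names, levels, and effect sizes are all hypothetical stand-ins for the study's 112 realizations):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)

rows = []
for g in ("A", "B", "C", "D"):                     # GCM type
    for p in ("CMIP3", "CMIP5"):                   # project phase
        for d in ("dynamical", "statistical"):     # downscaling method
            for rep in range(4):
                effect = ({"A": 0.0, "B": 2.0, "C": -1.0, "D": 1.0}[g]
                          + (1.5 if p == "CMIP5" else 0.0))   # built-in GCM/phase effects
                rows.append((g, p, d, effect + rng.normal(0, 1.0)))
df = pd.DataFrame(rows, columns=["gcm", "phase", "downscaling", "dq"])

model = ols("dq ~ C(gcm) + C(phase) + C(downscaling)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))             # which factors drive the variance?
```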

  10. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
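
    For reference, the estimators whose confidence the algorithm addresses are themselves short to compute from phase data. A minimal sketch of MVAR/MDEV and TVAR/TDEV follows (standard estimators; the equivalent-degrees-of-freedom and inverse chi-square machinery of the paper is omitted):

```python
import numpy as np

def mod_allan_var(x, m, tau0=1.0):
    """Modified Allan variance from phase data x (seconds) at tau = m * tau0."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # second differences
    c = np.cumsum(np.concatenate(([0.0], d2)))
    inner = c[m:] - c[:-m]                           # moving sums of m second differences
    return (inner**2).mean() / (2.0 * m**2 * (m * tau0)**2)

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(0, 1e-9, 10_000))           # toy clock: white FM noise
for m in (1, 4, 16):
    mvar = mod_allan_var(x, m)
    tvar = (m * 1.0)**2 / 3.0 * mvar                 # TVAR = tau^2 / 3 * MVAR
    print(f"m={m:3d}: MDEV={np.sqrt(mvar):.3e}  TDEV={np.sqrt(tvar):.3e}")
```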

  12. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document,...

  13. Regression calibration with heteroscedastic error variance.

    PubMed

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
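
    The heart of regression calibration is a reliability-ratio correction. The sketch below shows only the univariate, homoscedastic linear-model version (the estimator above generalizes this to logistic/Cox models and heteroscedastic error variance); all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)

n, beta_true = 5_000, 0.8
x = rng.normal(0, 1, n)                    # gold-standard covariate (validation data)
w = x + rng.normal(0, 0.6, n)              # error-prone surrogate
y = beta_true * x + rng.normal(0, 1, n)

beta_naive = np.polyfit(w, y, 1)[0]        # attenuated slope from the surrogate
lam = x.var() / w.var()                    # reliability ratio lambda = Var(X)/Var(W)
print(f"naive={beta_naive:.3f}  corrected={beta_naive / lam:.3f}  true={beta_true}")
```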

  14. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the study was conducted in compliance with the good laboratory practice regulations set forth in part... application for variance shall include the following information: (i) A description of the product and its... equipment, the proposed location of each unit. (viii) Such other information required by regulation or...

  15. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under... primary drinking water regulations. (c) For any request made under § 142.40(a): (1) Explanation in full... alternative raw water source or improvement of existing raw water source will be completed. (ii) Date of...

  16. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  17. Generalized Variance Function Applications in Forestry

    Treesearch

James Alegria; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...

  18. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document, including...

  19. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document, including...

  20. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document, including...

  1. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document, including...

  2. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...

  3. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR...

  4. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  5. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Dockets Management, except for information regarded as confidential under section 537(e) of the act. (d... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. (1) The application for variance shall include the following information: (i) A description of the product and...

  6. Understanding gender variance in children and adolescents.

    PubMed

    Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A

    2014-06-01

    Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. Copyright 2014, SLACK Incorporated.

  7. Variance approximations for assessments of classification accuracy

    Treesearch

    R. L. Czaplewski

    1994-01-01

    Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
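
    For a confusion matrix of map versus reference labels, Cohen's kappa and its large-sample variance can be approximated as below. This sketch uses the common textbook variance approximation, not the weighted and conditional derivations of the report itself; the counts are hypothetical.

```python
import numpy as np

def kappa_with_variance(confusion):
    """Cohen's kappa plus a simple large-sample variance approximation:
    var(kappa) ~ p_o (1 - p_o) / (N (1 - p_e)^2).  Sketch only."""
    M = np.asarray(confusion, dtype=float)
    N = M.sum()
    p = M / N
    p_o = np.trace(p)                             # observed agreement
    p_e = (p.sum(axis=0) * p.sum(axis=1)).sum()   # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    var = p_o * (1.0 - p_o) / (N * (1.0 - p_e) ** 2)
    return kappa, var

conf = [[45, 5, 0],
        [ 6, 30, 4],
        [ 1,  3, 6]]   # hypothetical map-vs-reference counts
k, v = kappa_with_variance(conf)
lo, hi = k - 1.96 * np.sqrt(v), k + 1.96 * np.sqrt(v)
print(f"kappa = {k:.3f}, approx. 95% CI = [{lo:.3f}, {hi:.3f}]")
```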

  8. Parameterization of Incident and Infragravity Swash Variance

    NASA Astrophysics Data System (ADS)

    Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.

    2002-12-01

    By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number ξ0 < 0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina, (0.3 < ξ0 < 4.0), where significant differences in the relationship between swash
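
    The dissipative/reflective classification above rests on the offshore Iribarren number. A short sketch of its computation under standard deep-water linear wave theory; the slope, height, and period values are hypothetical.

```python
import numpy as np

def iribarren(beach_slope, H0, T):
    """Offshore Iribarren number xi0 = tan(beta) / sqrt(H0 / L0),
    with deep-water wavelength L0 = g T^2 / (2 pi)."""
    g = 9.81
    L0 = g * T**2 / (2.0 * np.pi)
    return beach_slope / np.sqrt(H0 / L0)

print(iribarren(0.02, 2.0, 10.0))  # gentle slope: xi0 < 0.3, dissipative
print(iribarren(0.10, 1.0, 10.0))  # steeper slope: intermediate/reflective
```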

  9. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  10. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  11. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report, second quarter 1994, April 1994--June 1994

    SciTech Connect

    1995-09-01

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters. Results are described.

  12. Dose and volume reduction for normal lung using intensity-modulated radiotherapy for advanced-stage non-small-cell lung cancer.

    PubMed

    Murshed, Hasan; Liu, H Helen; Liao, Zhongxing; Barker, Jerry L; Wang, Xiaochun; Tucker, Susan L; Chandra, Anurag; Guerrero, Thomas; Stevens, Craig; Chang, Joe Y; Jeter, Melinda; Cox, James D; Komaki, Ritsuko; Mohan, Radhe

    2004-03-15

    To investigate dosimetric improvements with respect to tumor-dose conformity and normal tissue sparing using intensity-modulated radiotherapy (IMRT) compared with three-dimensional conformal radiotherapy (3D-CRT) for advanced-stage non-small-cell lung cancer (NSCLC). Forty-one patients with Stage III-IV and recurrent NSCLC who previously underwent 3D-CRT were included. IMRT plans were designed to deliver 63 Gy to 95% of the planning target volume using nine equidistant coplanar 6-MV beams. Inverse planning was performed to minimize the volumes of normal lung, heart, esophagus, and spinal cord irradiated above their tolerance doses. Dose distributions and dosimetric indexes for the tumors and critical structures in both plans were computed and compared. Using IMRT, the median absolute reduction in the percentage of lung volume irradiated to >10 and >20 Gy was 7% and 10%, respectively. This corresponded to a decrease of >2 Gy in the total lung mean dose and of 10% in the risk of radiation pneumonitis. The volumes of the heart and esophagus irradiated to >40-50 Gy and normal thoracic tissue volume irradiated to >10-40 Gy were reduced using the IMRT plans. A marginal increase occurred in the spinal cord maximal dose and in the lung volume irradiated to >5 Gy in the IMRT plans, which could have resulted from the significant increase in monitor units and thus leakage dose in IMRT. IMRT planning significantly improved target coverage and reduced the volume of normal lung irradiated above low doses. The spread of low doses to normal tissues can be controlled in IMRT with appropriately selected planning parameters. The dosimetric benefits of IMRT for advanced-stage non-small-cell lung cancer must be evaluated further in clinical trials.

  13. Effectiveness of Losartan-Loaded Hyaluronic Acid (HA) Micelles for the Reduction of Advanced Hepatic Fibrosis in C3H/HeN Mice Model.

    PubMed

    Thomas, Reju George; Moon, Myeong Ju; Kim, Jo Heon; Lee, Jae Hyuk; Jeong, Yong Yeon

    2015-01-01

    Advanced hepatic fibrosis therapy using drug-delivering nanoparticles is a relatively unexplored area. Angiotensin type 1 (AT1) receptor blockers such as losartan can be delivered to hepatic stellate cells (HSC), blocking their activation and thereby reducing fibrosis progression in the liver. In our study, we analyzed the possibility of utilizing drug-loaded vehicles such as hyaluronic acid (HA) micelles carrying losartan to attenuate HSC activation. Losartan, which exhibits inherent lipophilicity, was loaded into the hydrophobic core of HA micelles with a 19.5% drug loading efficiency. An advanced liver fibrosis model was developed using C3H/HeN mice subjected to 20 weeks of prolonged TAA/ethanol weight-adapted treatment. The cytocompatibility and cell uptake profile of losartan-HA micelles were studied in murine fibroblast cells (NIH3T3), human hepatic stellate cells (hHSC) and FL83B cells (hepatocyte cell line). The ability of these nanoparticles to attenuate HSC activation was studied in activated HSC cells based on alpha smooth muscle actin (α-sma) expression. Mice treated with oral losartan or losartan-HA micelles were analyzed for serum enzyme levels (ALT/AST, CK and LDH) and collagen deposition (hydroxyproline levels) in the liver. The accumulation of HA micelles was observed in fibrotic livers, which suggests increased delivery of losartan compared to normal livers and specific uptake by HSC. Active reduction of α-sma was observed in hHSC and the liver sections of losartan-HA micelle-treated mice. The serum enzyme levels and collagen deposition of losartan-HA micelle-treated mice were reduced significantly compared to the oral losartan group. Losartan-HA micelles demonstrated significant attenuation of hepatic fibrosis via an HSC-targeting mechanism in our in vitro and in vivo studies. These nanoparticles can be considered as an alternative therapy for liver fibrosis.

  14. Analysis of variance of microarray data.

    PubMed

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.

  15. Analysis of Variance of Multiply Imputed Data.

    PubMed

    van Ginkel, Joost R; Kroonenberg, Pieter M

    2014-01-01

    As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for the results of analysis of variance for multiply imputed data sets. It involves both reformulating the ANOVA model as a regression model using effect coding of the predictors and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
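
    A sketch of the combination rules the abstract refers to: per-imputation coefficient vectors for an effect-coded factor are pooled with the multi-parameter (D1) rules, yielding an F-type statistic for the factor. Variable names and the toy inputs are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import f as f_dist

def pool_anova_D1(Q_list, U_list):
    """Pool an effect-coded factor's coefficients over m imputed data sets
    using multi-parameter (D1) combining rules.  Q_list: per-imputation
    coefficient vectors; U_list: their covariance matrices.  Sketch only."""
    m, k = len(Q_list), len(Q_list[0])
    Qbar = np.mean(Q_list, axis=0)
    Ubar = np.mean(U_list, axis=0)               # within-imputation covariance
    B = np.atleast_2d(np.cov(np.asarray(Q_list).T, ddof=1))  # between-imputation
    r1 = (1 + 1 / m) * np.trace(B @ np.linalg.inv(Ubar)) / k
    D1 = Qbar @ np.linalg.inv(Ubar) @ Qbar / (k * (1 + r1))
    t = k * (m - 1)
    if t > 4:
        df2 = 4 + (t - 4) * (1 + (1 - 2 / t) / r1) ** 2
    else:
        df2 = t * (1 + 1 / k) * (1 + 1 / r1) ** 2 / 2
    return D1, k, df2, f_dist.sf(D1, k, df2)     # F statistic, dfs, p-value

rng = np.random.default_rng(1)
Qs = [rng.normal([0.5, -0.2], 0.05) for _ in range(5)]  # 5 imputations
Us = [np.diag([0.01, 0.01]) for _ in range(5)]
print(pool_anova_D1(Qs, Us))
```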

  16. Stress Variances Among Informal Hospice Caregivers

    PubMed Central

    Wittenberg-Lyles, Elaine; Demiris, George; Oliver, Debra Parker; Washington, Karla; Burt, Stephanie; Shaunfield, Sara

    2013-01-01

    Care interventions are not routinely provided for hospice caregivers, despite widespread documentation of the burden and toll of the caregiving experience. Assessing caregivers for team interventions (ACT) proposes that holistic patient and family care includes ongoing caregiver needs assessment of primary, secondary, and intrapsychic stressors. In this study, our goal was to describe the variance in stressors for caregivers to establish evidence for the ACT theoretical framework. We used secondary interview data from a randomized controlled trial to analyze hospice caregiver discussions about concerns. We found variances in stress types, suggesting that caregiver interventions should range from knowledge and skill building to cognitive-behavioral interventions that aid in coping. Family members who assume the role of primary caregiver for a dying loved one need to be routinely assessed by hospice providers for customized interventions. PMID:22673093

  17. Analysis of Variance of Multiply Imputed Data

    PubMed Central

    van Ginkel, Joost R.; Kroonenberg, Pieter M.

    2014-01-01

    As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for the results of analysis of variance for multiply imputed data sets. It involves both reformulating the ANOVA model as a regression model using effect coding of the predictors and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values. PMID:24860197

  18. Variance and covariance of accumulated displacement estimates.

    PubMed

    Bayer, Matthew; Hall, Timothy J

    2013-04-01

    Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
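
    The accumulation behavior described above follows from Var(Σ εi) = Σ Var(εi) + 2 Σ Cov(εi, εj): negative step-to-step correlation slows the growth of accumulated variance, positive correlation speeds it up. A small simulation of that identity; this is illustrative, not the authors' ultrasound model.

```python
import numpy as np

def accumulated_variance(rho, n=20, s2=1.0, trials=200000, seed=0):
    """Variance of a sum of n step errors with correlation rho between
    consecutive steps.  Theory: Var(sum) = n*s2 + 2*(n-1)*rho*s2."""
    rng = np.random.default_rng(seed)
    # tridiagonal covariance: variance s2, consecutive-step covariance rho*s2
    C = np.eye(n) * s2 + np.diag(np.full(n - 1, rho * s2), 1) \
                       + np.diag(np.full(n - 1, rho * s2), -1)
    L = np.linalg.cholesky(C)
    steps = rng.standard_normal((trials, n)) @ L.T
    return steps.sum(axis=1).var()

for rho in (-0.4, 0.0, 0.4):
    sim = accumulated_variance(rho)
    theory = 20 + 2 * 19 * rho
    print(f"rho={rho:+.1f}: simulated {sim:6.2f}, theory {theory:6.2f}")
```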

  19. Variance and Covariance of Accumulated Displacement Estimates

    PubMed Central

    Bayer, Matthew; Hall, Timothy J.

    2013-01-01

    Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, while errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel. PMID:23493610

  20. Linear transformations of variance/covariance matrices.

    PubMed

    Parois, Pascal; Lutz, Martin

    2011-07-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
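
    A sketch of the suggested recipe: flatten the 3 × 3 tensor to a 9-vector, form the 9 × 9 Jacobian of the linear transformation as a Kronecker product, and propagate the variance/covariance matrix through it. The basis change and covariance below are made-up numbers, not crystallographic data.

```python
import numpy as np

# For a basis change U' = R U R^T, row-major flattening gives
# vec(U') = (R kron R) vec(U), so the covariance of vec(U) transforms
# as V' = J V J^T with J = np.kron(R, R).

rng = np.random.default_rng(2)
R = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])          # example basis change (axis swap)
A = rng.normal(size=(9, 9))
V = A @ A.T                           # synthetic 9x9 variance/covariance matrix

J = np.kron(R, R)                     # Jacobian of the linearized transform
V_new = J @ V @ J.T                   # transformed variance/covariance
print(np.sqrt(np.diag(V_new))[:3])    # standard uncertainties of first entries

u = rng.normal(size=9)                # a tensor, stored as a 9-vector
u_new = J @ u
U_new = R @ u.reshape(3, 3) @ R.T     # same transform in matrix form
assert np.allclose(u_new.reshape(3, 3), U_new)
```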

  1. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  2. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  3. The Theory of Variances in Equilibrium Reconstruction

    SciTech Connect

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  4. Variance and skewness in the FIRST survey

    NASA Astrophysics Data System (ADS)

    Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.

    1998-10-01

    We investigate the large-scale clustering of radio sources in the FIRST 1.4-GHz survey by analysing the distribution function (counts in cells). We select a reliable sample from the FIRST catalogue, paying particular attention to the problem of how to define single radio sources from the multiple components listed. We also consider the incompleteness of the catalogue. We estimate the angular two-point correlation function w(theta), the variance Psi_2 and skewness Psi_3 of the distribution for the various subsamples chosen on different criteria. Both w(theta) and Psi_2 show power-law behaviour with an amplitude corresponding to a spatial correlation length of r_0~10h^-1Mpc. We detect significant skewness in the distribution, the first such detection in radio surveys. This skewness is found to be related to the variance through Psi_3=S_3(Psi_2)^alpha, with alpha=1.9+/-0.1, consistent with the non-linear gravitational growth of perturbations from primordial Gaussian initial conditions. We show that the amplitude of variance and the skewness are consistent with realistic models of galaxy clustering.
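
    The counts-in-cells estimators used here can be sketched as follows. The shot-noise corrections are the standard ones from counts-in-cells practice and may differ in detail from the paper's estimators; the synthetic counts are illustrative.

```python
import numpy as np

def psi2_psi3(counts):
    """Shot-noise-corrected variance and skewness of cell counts:
    Psi2 = (<(N-Nbar)^2> - Nbar) / Nbar^2
    Psi3 = (<(N-Nbar)^3> - 3*Psi2*Nbar^2 - Nbar) / Nbar^3"""
    N = np.asarray(counts, dtype=float)
    nbar = N.mean()
    m2 = ((N - nbar) ** 2).mean()
    m3 = ((N - nbar) ** 3).mean()
    psi2 = (m2 - nbar) / nbar**2
    psi3 = (m3 - 3 * psi2 * nbar**2 - nbar) / nbar**3
    return psi2, psi3

rng = np.random.default_rng(3)
lam = rng.lognormal(mean=2.0, sigma=0.5, size=10000)  # clustered intensities
counts = rng.poisson(lam)                             # source counts per cell
print(psi2_psi3(counts))
```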

  5. Directional variance analysis of annual rings

    NASA Astrophysics Data System (ADS)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    The wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high quality products with higher marketing value than is produced today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example are less distinguishable and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on Radon-transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis on the maximum local variance estimate instead of the original image produces more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.
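
    A sketch of the idea, using the scikit-image Radon transform as a stand-in for the authors' implementation: project the image along many directions and compare the variances of the projections; the extreme-variance angles indicate ring orientation. The synthetic stripe image is illustrative.

```python
import numpy as np
from skimage.transform import radon

# Synthetic "annual ring" stripes oriented at roughly 0.5 rad.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
image = np.sin(40 * (x * np.cos(0.5) + y * np.sin(0.5)))

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=False)  # one projection per angle
proj_var = sinogram.var(axis=0)                     # variance of each projection

print("max-variance angle (deg):", theta[np.argmax(proj_var)])
print("min-variance angle (deg):", theta[np.argmin(proj_var)])
```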

  6. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  7. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt will be made here to survey what can be used, to attempt recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  8. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    NASA Astrophysics Data System (ADS)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  9. Mindfulness-Based Stress Reduction in Advanced Nursing Practice: A Nonpharmacologic Approach to Health Promotion, Chronic Disease Management, and Symptom Control.

    PubMed

    Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula

    2015-09-01

    The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary. © The Author(s) 2015.

  10. Derivation of the Data Reduction Equations for the Calibration of the Six-component Thrust Stand in the CE-22 Advanced Nozzle Test Facility

    NASA Technical Reports Server (NTRS)

    Wong, Kin C.

    2003-01-01

    This paper documents the derivation of the data reduction equations for the calibration of the six-component thrust stand located in the CE-22 Advanced Nozzle Test Facility. The purpose of the calibration is to determine the first-order interactions between the axial, lateral, and vertical load cells (second-order interactions are assumed to be negligible). In an ideal system, the measurements made by the thrust stand along the three coordinate axes should be independent. For example, when a test article applies an axial force on the thrust stand, the axial load cells should measure the full magnitude of the force, while the off-axis load cells (lateral and vertical) should read zero. Likewise, if a lateral force is applied, the lateral load cells should measure the entire force, while the axial and vertical load cells should read zero. However, in real-world systems, there may be interactions between the load cells. Through proper design of the thrust stand, these interactions can be minimized, but are hard to eliminate entirely. Therefore, the purpose of the thrust stand calibration is to account for these interactions, so that necessary corrections can be made during testing. These corrections can be expressed in the form of an interaction matrix, and this paper shows the derivation of the equations used to obtain the coefficients in this matrix.
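
    The calibration described above amounts to a linear least-squares problem: apply known loads, record the load-cell readings, fit the first-order interaction matrix, and invert it to obtain the correction. A numerical sketch with made-up interaction coefficients, not the CE-22 data reduction equations themselves.

```python
import numpy as np

rng = np.random.default_rng(4)
K_true = np.array([[1.00, 0.02, 0.01],
                   [0.03, 1.00, 0.02],
                   [0.01, 0.01, 1.00]])   # small off-axis interactions

F = rng.uniform(-1000, 1000, size=(3, 40))         # applied calibration loads
R = K_true @ F + rng.normal(0, 0.5, size=(3, 40))  # noisy load-cell readings

# Model R = K F; solve R^T = F^T K^T for K in the least-squares sense.
K_hat = np.linalg.lstsq(F.T, R.T, rcond=None)[0].T
C = np.linalg.inv(K_hat)              # interaction-correction matrix

print(np.round(C @ R - F, 1))         # residuals should sit at the noise level
```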

  11. Enhanced nitrogen and phosphorus removal by an advanced simultaneous sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal wastewater treatment process.

    PubMed

    Yan, Peng; Guo, Jin-Song; Wang, Jing; Chen, You-Peng; Ji, Fang-Ying; Dong, Yang; Zhang, Hong; Ouyang, Wen-juan

    2015-05-01

    An advanced wastewater treatment process (SIPER) was developed to simultaneously decrease sludge production, prevent the accumulation of inorganic solids, recover phosphorus, and enhance nutrient removal. The feasibility of simultaneous enhanced nutrient removal along with sludge reduction as well as the potential for enhanced nutrient removal via this process were further evaluated. The results showed that the denitrification potential of the supernatant of alkaline-treated sludge was higher than that of the influent. The system COD and VFA were increased by 23.0% and 68.2%, respectively, after the return of alkaline-treated sludge as an internal C-source, and the internal C-source contributed 24.1% of the total C-source. A total of 74.5% of phosphorus from wastewater was recovered as a usable chemical crystalline product. The nitrogen and phosphorus removal were improved by 19.6% and 23.6%, respectively, after incorporation of the side-stream system. Sludge minimization and excellent nutrient removal were successfully coupled in the SIPER process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Perspectives on beam-shaping optimization for thermal-noise reduction in advanced gravitational-wave interferometric detectors: Bounds, profiles, and critical parameters

    NASA Astrophysics Data System (ADS)

    Pierro, Vincenzo; Galdi, Vincenzo; Castaldi, Giuseppe; Pinto, Innocenzo M.; Agresti, Juri; Desalvo, Riccardo

    2007-12-01

    Suitable shaping (in particular, flattening and broadening) of the laser beam has recently been proposed as an effective device to reduce internal (mirror) thermal noise in advanced gravitational-wave interferometric detectors. Based on some recently published analytic approximations (valid in the infinite-test-mass limit) for the Brownian and thermoelastic mirror noises in the presence of arbitrary-shaped beams, this paper addresses certain preliminary issues related to the optimal beam-shaping problem. In particular, with specific reference to the Laser Interferometer Gravitational-wave Observatory (LIGO) experiment, absolute and realistic lower bounds for the various thermal-noise constituents are obtained and compared with the current status (Gaussian beams) and trends (mesa beams), indicating fairly ample margins for further reduction. In this framework, the effective dimension of the related optimization problem, and its relationship to the critical design parameters are identified, physical-feasibility and model-consistency issues are considered, and possible additional requirements and/or prior information exploitable to drive the subsequent optimization process are highlighted.

  13. Visual SLAM Using Variance Grid Maps

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
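
    The elevation variance map contrasted above with occupancy and elevation grids can be sketched as a grid of running per-cell statistics, updated online with Welford's algorithm; the RBPF machinery itself is not reproduced here, and all names are illustrative.

```python
import numpy as np

class VarianceGrid:
    """Per-cell running count, mean, and variance of elevation samples."""
    def __init__(self, shape):
        self.n = np.zeros(shape)
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)          # sum of squared deviations

    def update(self, i, j, z):
        """Fold elevation sample z into cell (i, j) (Welford update)."""
        self.n[i, j] += 1
        d = z - self.mean[i, j]
        self.mean[i, j] += d / self.n[i, j]
        self.m2[i, j] += d * (z - self.mean[i, j])

    def variance(self, i, j):
        return self.m2[i, j] / self.n[i, j] if self.n[i, j] > 1 else np.inf

grid = VarianceGrid((100, 100))
rng = np.random.default_rng(5)
for z in rng.normal(2.0, 0.3, size=500):   # stereo elevation hits in one cell
    grid.update(10, 20, z)
print(grid.mean[10, 20], grid.variance(10, 20))
```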

  14. The defect variance of random spherical harmonics

    NASA Astrophysics Data System (ADS)

    Marinucci, Domenico; Wigman, Igor

    2011-09-01

    The defect of a function f : M → ℝ is defined as the difference between the measure of the positive and negative regions. In this paper, we begin the analysis of the distribution of defect of random Gaussian spherical harmonics. By an easy argument, the defect is non-trivial only for even degree and the expected value always vanishes. Our principal result is evaluating the defect variance, asymptotically in the high-frequency limit. As other geometric functionals of random eigenfunctions, the defect may be used as a tool to probe the statistical properties of spherical random fields, a topic of great interest for modern cosmological data analysis.

  15. The theory of variances in equilibrium reconstruction

    SciTech Connect

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, D. C.

    2008-09-15

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma characteristics can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature and for improvements of numerical algorithms used in reconstruction.

  16. Variance and Skewness in the FIRST Survey

    NASA Astrophysics Data System (ADS)

    Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.

    We investigate the large-scale clustering of radio sources by analysing the distribution function of the FIRST 1.4 GHz survey. We select a reliable galaxy sample from the FIRST catalogue, paying particular attention to the definition of single radio sources from the multiple components listed in the FIRST catalogue. We estimate the variance, Ψ2, and skewness, Ψ3, of the distribution function for the best galaxy subsample. Ψ2 shows power-law behaviour as a function of cell size, with an amplitude corresponding to a spatial correlation length of r0 ~10 h-1 Mpc. We detect significant skewness in the distribution, and find that it is related to the variance through the relation Ψ3 = S3 (Ψ2)^α with α = 1.9 +/- 0.1, consistent with the non-linear growth of perturbations from primordial Gaussian initial conditions. We show that the amplitude of clustering (corresponding to a spatial correlation length of r0 ~10 h-1 Mpc) and skewness are consistent with realistic models of galaxy clustering.

  17. Multivariate Granger causality and generalized variance.

    PubMed

    Barrett, Adam B; Barnett, Lionel; Seth, Anil K

    2010-04-01

    Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or "ensembles" of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke's seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define "partial" Granger causality in the multivariate context and we also motivate reformulations of "causal density" and "Granger autonomy." Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
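
    A minimal lag-1 sketch of the generalized-variance measure: the Granger causality from block Y to block X is the difference of residual-covariance log-determinants between the reduced and full vector autoregressions. Names and the toy system are illustrative.

```python
import numpy as np

def gc_generalized_variance(X, Y):
    """F(Y->X) = ln det(Sigma_reduced) - ln det(Sigma_full) for a lag-1 VAR.
    X, Y: (T, dx) and (T, dy) time-series blocks.  Sketch only."""
    Xt, Xlag, Ylag = X[1:], X[:-1], Y[:-1]
    def resid_cov(regressors):
        beta, *_ = np.linalg.lstsq(regressors, Xt, rcond=None)
        r = Xt - regressors @ beta
        return np.atleast_2d(np.cov(r.T))
    _, logdet_full = np.linalg.slogdet(resid_cov(np.hstack([Xlag, Ylag])))
    _, logdet_red = np.linalg.slogdet(resid_cov(Xlag))
    return logdet_red - logdet_full   # >= 0; larger means Y helps predict X

rng = np.random.default_rng(6)
T = 2000
Y = rng.standard_normal((T, 2))
X = np.zeros((T, 2))
for t in range(1, T):                 # X driven by lagged Y
    X[t] = 0.4 * X[t - 1] + 0.5 * Y[t - 1] + 0.1 * rng.standard_normal(2)
print("F(Y->X) =", gc_generalized_variance(X, Y))   # clearly positive
print("F(X->Y) =", gc_generalized_variance(Y, X))   # near zero
```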

  18. Multivariate Granger causality and generalized variance

    NASA Astrophysics Data System (ADS)

    Barrett, Adam B.; Barnett, Lionel; Seth, Anil K.

    2010-04-01

    Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or “ensembles” of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke’s seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define “partial” Granger causality in the multivariate context and we also motivate reformulations of “causal density” and “Granger autonomy.” Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.

  19. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alinement.
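
    The "most frequent" random walk described above is the sine-generated curve, with direction angle theta(s) = omega * sin(2*pi*s/M) along channel distance s. A short numerical sketch traces one meander wavelength and measures its sinuosity; omega = 110 degrees is used purely as an illustrative strongly-meandering value.

```python
import numpy as np

M = 1.0                                    # one meander wavelength of channel
omega = np.deg2rad(110.0)                  # max angle against the valley axis
s = np.linspace(0.0, M, 5001)
theta = omega * np.sin(2 * np.pi * s / M)  # direction angle: sine of distance

ds = s[1] - s[0]
x = np.cumsum(np.cos(theta)) * ds          # integrate the unit tangent
y = np.cumsum(np.sin(theta)) * ds

sinuosity = M / np.hypot(x[-1] - x[0], y[-1] - y[0])
print("sinuosity:", round(sinuosity, 2))   # channel length / straight distance
```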

  20. Fabricating Pt/Sn-In2O3 Nanoflower with Advanced Oxygen Reduction Reaction Performance for High-Sensitivity MicroRNA Electrochemical Detection.

    PubMed

    Zhang, Kai; Dong, Haifeng; Dai, Wenhao; Meng, Xiangdan; Lu, Huiting; Wu, Tingting; Zhang, Xueji

    2017-01-03

    Herein, an efficient electrochemical tracer with advanced oxygen reduction reaction (ORR) performance was designed by controllably decorating platinum (Pt) (diameter, 1 nm) on the surface of compositionally tunable tin-doped indium oxide nanoparticle (Sn-In2O3) (diameter, 25 nm), and using the Pt/Sn-In2O3 as electrochemical tracer and interfacial term hairpin capture probe, a facile and ultrasensitive microRNA (miRNA) detection strategy was developed. The morphology and composition of the generated Pt/Sn-In2O3 NPs were comprehensively characterized by spectroscopic and microscopic measurements, indicating numerous Pt uniformly anchored on the surface of Sn-In2O3. The interaction between Pt and surface Sn as well as high Pt(111) exposure resulted in the excellent electrochemical catalytic ability and stability of the Pt/Sn-In2O3 ORR. As proof-of-principle, using streptavidin (SA) functionalized Pt/Sn-In2O3 (SA/Pt/Sn-In2O3) as the electrochemical tracer to amplify the detectable signal and an interfacial term hairpin probe as the target capture probe, a miRNA biosensor with a linear range from 5 pM to 0.5 fM and a limit of detection (LOD) down to 1.92 fM was developed. Meanwhile, the inherent selectivity of the term hairpin capture probe endowed the biosensor with good base discrimination ability. The good feasibility for real sample detection was also demonstrated. The work paves a new avenue to fabricate and design highly effective electrocatalytic tracers, which have great promise in new bioanalytical applications.

  1. The influence of local spring temperature variance on temperature sensitivity of spring phenology.

    PubMed

    Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe

    2014-05-01

    The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assembled a ground-based long-term (20-50 years) spring phenology database (the PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level from ground observations. It suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that the frost risk could be higher where local spring temperature variance is larger, and plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study illuminates that local spring temperature variance is an understudied source in the study of phenological sensitivity and highlights the necessity of incorporating this factor to improve the predictability of plant responses to anthropogenic climate change in future studies.

  2. The variance of the adjusted Rand index.

    PubMed

    Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

    2016-06-01

    For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
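
    The analytic variance derived in the paper is not reproduced here; as a stand-in, the sketch below estimates the adjusted Rand index's variability by bootstrapping observations and forms the kind of normal-theory interval the abstract describes. The partitions are synthetic.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(7)
labels_a = rng.integers(0, 3, size=300)                # partition 1
labels_b = np.where(rng.random(300) < 0.8, labels_a,   # partition 2: 80% agreement
                    rng.integers(0, 3, size=300))

ari = adjusted_rand_score(labels_a, labels_b)
boot = []
for _ in range(2000):
    idx = rng.integers(0, 300, size=300)               # resample label pairs
    boot.append(adjusted_rand_score(labels_a[idx], labels_b[idx]))
se = np.std(boot, ddof=1)
print(f"ARI = {ari:.3f}, "
      f"normal-approx 95% CI = [{ari - 1.96*se:.3f}, {ari + 1.96*se:.3f}]")
```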

  3. Applications of Variance Fractal Dimension: a Survey

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Phukpattaranont, Pornchai; Limsakul, Chusak

    2012-04-01

    Chaotic dynamical systems are pervasive in nature and can be shown to be deterministic through fractal analysis. There are numerous methods that can be used to estimate the fractal dimension. Among the usual fractal estimation methods, variance fractal dimension (VFD) is one of the most significant fractal analysis methods that can be implemented for real-time systems. The basic concept and theory of VFD are presented. Recent research and the development of several applications based on VFD are reviewed and explained in detail, such as biomedical signal processing and pattern recognition, speech communication, geophysical signal analysis, power systems and communication systems. The important parameters that need to be considered in computing the VFD are discussed, including the window size and the window increment of the feature, and the step size of the VFD. Directions for future research of VFD are also briefly outlined.
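
    A minimal VFD sketch for a 1-D signal: the increment variance scales as Var[x(t+dt) - x(t)] ~ dt^(2H), H is half the slope of log-variance versus log-lag, and D = 2 - H for embedding dimension E = 1. The window-size and step parameters the survey discusses are collapsed to a single global estimate here.

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16, 32)):
    """Global variance fractal dimension of a 1-D signal (sketch)."""
    log_lag, log_var = [], []
    for k in lags:
        inc = x[k:] - x[:-k]                   # increments at lag k
        log_lag.append(np.log(k))
        log_var.append(np.log(inc.var()))
    H = 0.5 * np.polyfit(log_lag, log_var, 1)[0]   # Hurst exponent = slope/2
    return 2.0 - H

rng = np.random.default_rng(8)
brownian = np.cumsum(rng.standard_normal(100000))  # H = 0.5, so D should be ~1.5
print(variance_fractal_dimension(brownian))
```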

  4. Variance of vestibular-evoked myogenic potentials.

    PubMed

    Ochi, K; Ohashi, T; Nishino, H

    2001-03-01

    Vestibular-evoked myogenic potential (VEMP) has been thought to originate from the sacculus. The variance of this potential and the effectiveness of the adjustments of pInII amplitudes using average muscle tonus of the ipsilateral sternocleidomastoid muscle were evaluated. In addition, clinical application of VEMP was examined in patients with acoustic tumors (ATs) and vestibular neurolabyrinthitis (VNL). Prospective evaluation of the VEMP in 18 normal volunteers and 6 patients. Variance and left-right difference of each parameter, including pI latency, nII latency, pInII amplitude, and threshold, were analyzed. Input-output function of pInII amplitude was evaluated. Average muscle tonus was calculated in 20 ears and applied for adjustment of pInII amplitude. Sensitivity of each parameter of VEMP was examined in 3 patients with ATs and 3 patients with VNL. VEMP was present in all 36 ears of 18 control subjects. Thresholds of VEMP for normal subjects were 80 to 95 dB normal hearing level (nHL). The muscle tonus affected pInII amplitude significantly; however, no statistically significant improvement was observed in test-retest investigation after adjustment using muscle tonus. The threshold of the affected side was elevated compared with the non-affected side in all patients with ATs, whereas 2 of 3 patients showed normal pInII-ratio. One patient with VNL presented normal VEMP, whereas 2 patients presented no VEMP at the highest stimulus intensity. Interaural difference of thresholds might be the most useful parameter. Adjustment using average muscle tonus is not necessary when the subject is able to get sufficient muscle tonus.

  5. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted... with section 110(i) of the Clean Air Act, which prohibits any State or EPA from granting a variance...

  6. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted... with section 110(i) of the Clean Air Act, which prohibits any State or EPA from granting a variance...

  7. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted... with section 110(i) of the Clean Air Act, which prohibits any State or EPA from granting a variance...

  8. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted... with section 110(i) of the Clean Air Act, which prohibits any State or EPA from granting a variance...

  9. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...

  10. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...

  11. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
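
    The flavor of the variance-regularization idea can be sketched with a much simpler fixed-weight shrinkage toward a pooled target. This is not the MVR package's clustering-based joint mean/variance procedure, only an illustration of why shrinkage helps when the sample size is small and the number of variables is large.

```python
import numpy as np

rng = np.random.default_rng(9)
p, n = 5000, 6                        # many variables, tiny sample size
true_var = 1.0
X = rng.normal(0.0, np.sqrt(true_var), size=(p, n))

s2 = X.var(axis=1, ddof=1)            # noisy per-variable variance estimates
target = np.median(s2)                # pooled shrinkage target
lam = 0.5                             # shrinkage weight (tuning constant)
s2_shrunk = lam * target + (1 - lam) * s2

print("MSE raw    :", np.mean((s2 - true_var) ** 2))
print("MSE shrunk :", np.mean((s2_shrunk - true_var) ** 2))  # clearly smaller
```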

  12. Hidden Item Variance in Multiple Mini-Interview Scores

    ERIC Educational Resources Information Center

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  13. Considering Oil Production Variance as an Indicator of Peak Production

    DTIC Science & Technology

    2010-06-07

    [Figure captions recovered from the excerpt] Acquisition Cost (IRAC) Oil Prices; source: data used to construct the graph acquired from the EIA (http://tonto.eia.doe.gov/country/timeline/oil_chronology.cfm). Production vs. Price – Variance Comparison: oil production variance and oil price variance have never been so far

  14. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances granted pursuant to this part shall have only future effect. In his discretion, the Assistant Secretary...

  15. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  16. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  17. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  18. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...) of this section, the administrator may grant a variance for a specific remote facility if the agency...

  19. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-28

    ... Occupational Safety and Health Administration Avalotis Corp.; Grant of a Permanent Variance AGENCY: Occupational Safety and Health Administration (OSHA), Department of Labor. ACTION: Notice of a grant of a permanent variance. SUMMARY: This notice announces the grant of a permanent variance to Avalotis Corp...

  20. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
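
    A sketch in the spirit of this family of tests: pool and rank all observations, then run a mean-based Levene ANOVA on the ranks. This is one common rank-based variant, not necessarily the exact procedure of the paper; the example data are synthetic.

```python
import numpy as np
from scipy import stats

def nonparametric_levene(*groups):
    """Rank-based Levene test: ANOVA on |rank - group mean rank|."""
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)
    splits = np.cumsum([len(g) for g in groups])[:-1]
    rank_groups = np.split(ranks, splits)
    devs = [np.abs(r - r.mean()) for r in rank_groups]
    return stats.f_oneway(*devs)

rng = np.random.default_rng(10)
a = rng.exponential(1.0, 40)          # skewed, variance 1
b = rng.exponential(2.0, 40)          # skewed, variance 4
print(nonparametric_levene(a, b))
```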

  1. Degradation mechanisms and kinetic studies for the treatment of X-ray contrast media compounds by advanced oxidation/reduction processes.

    PubMed

    Jeong, Joonseon; Jung, Jinyoung; Cooper, William J; Song, Weihua

    2010-08-01

    The presence of iodinated X-ray contrast media compounds (ICM) in surface and ground waters has been reported. This is likely due to their biological inertness and incomplete removal in wastewater treatment processes. The present study reports partial degradation mechanisms based on elucidating the structures of major reaction by-products using gamma-irradiation and LC-MS. Studies conducted at concentrations higher than those observed in natural waters were necessary to elucidate the reaction by-product structures and to develop destruction mechanisms. To support these mechanistic studies, the bimolecular rate constants for the reaction of the hydroxyl radical (•OH) and the hydrated electron (e^-(aq)) with one ionic ICM (diatrizoate), four non-ionic ICM (iohexol, iopromide, iopamidol, and iomeprol), and several analogues of diatrizoate were determined. The absolute bimolecular reaction rate constants for diatrizoate, iohexol, iopromide, iopamidol, and iomeprol with •OH were (9.58 +/- 0.23) x 10^8, (3.20 +/- 0.13) x 10^9, (3.34 +/- 0.14) x 10^9, (3.42 +/- 0.28) x 10^9, and (2.03 +/- 0.13) x 10^9 M^-1 s^-1, and with e^-(aq) were (2.13 +/- 0.03) x 10^10, (3.35 +/- 0.03) x 10^10, (3.25 +/- 0.05) x 10^10, (3.37 +/- 0.05) x 10^10, and (3.47 +/- 0.02) x 10^10 M^-1 s^-1, respectively. Transient spectra for the intermediates formed by the reaction of •OH were also measured over the time period of 1-100 microseconds to better understand the stability of the radicals and for evaluation of reaction rate constants. Degradation efficiencies for the •OH and e^-(aq) reactions with the five ICM were determined using steady-state gamma-radiolysis. Collectively, these data will form the basis of kinetic models for application of advanced oxidation/reduction processes for treating water containing these compounds.

  2. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers

    SciTech Connect

    Smith, L.L.; Hooper, M.P. )

    1992-07-13

    This Phase 2 Test Report summarizes the testing activities and results for the second testing phase of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The second phase demonstrates the Advanced Overfire Air (AOFA) retrofit with existing Foster Wheeler (FWEC) burners. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data supported by short-term characterization data. Ultimately, a fifty percent NOx reduction target using combinations of combustion modifications has been established for this project.

  3. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Phase 2, Overfire air tests

    SciTech Connect

    Smith, L.L.; Hooper, M.P.

    1992-07-13

    This Phase 2 Test Report summarizes the testing activities and results for the second testing phase of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The second phase demonstrates the Advanced Overfire Air (AOFA) retrofit with existing Foster Wheeler (FWEC) burners. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data supported by short-term characterization data. Ultimately, a fifty percent NOx reduction target using combinations of combustion modifications has been established for this project.

  4. Correcting an analysis of variance for clustering.

    PubMed

    Hedges, Larry V; Rhoads, Christopher H

    2011-02-01

    A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
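
    The multiplicative correction described above is easy to apply in practice. The sketch below uses a simplified design-effect adjustment (DEFF = 1 + (m - 1)ρ) rather than the exact factors derived in the paper; scipy is assumed available:

        import numpy as np
        from scipy import stats

        def clustered_f_pvalue(f_obs, df1, n_total, m, rho):
            """Deflate a naive one-way ANOVA F statistic for clustering.

            Simplified design-effect version, not the exact correction in
            the paper: F is divided by DEFF = 1 + (m - 1) * rho and the
            error degrees of freedom shrink to the effective sample size.
            """
            deff = 1.0 + (m - 1) * rho
            f_adj = f_obs / deff
            df2 = n_total / deff - df1 - 1    # reduced error df
            return f_adj, stats.f.sf(f_adj, df1, df2)

        # Example: naive F = 5.2 from 2 groups x 10 classrooms x 15 pupils, ICC = 0.15
        f_adj, p = clustered_f_pvalue(5.2, df1=1, n_total=300, m=15, rho=0.15)
        print(f"adjusted F = {f_adj:.2f}, p = {p:.3f}")

    At rho = 0 the statistic reduces to the naive F, mirroring the limiting behaviour of the exact correction described in the abstract.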

  5. Explaining variance in black carbon's aging timescale

    NASA Astrophysics Data System (ADS)

    Fierce, L.; Riemer, N.; Bond, T. C.

    2015-03-01

    The size and composition of particles containing black carbon (BC) are modified soon after emission by condensation of semivolatile substances and coagulation with other particles, known collectively as "aging" processes. Although this change in particle properties is widely recognized, the timescale for transformation is not well constrained. In this work, we simulated aerosol aging with the particle-resolved model PartMC-MOSAIC (Particle Monte Carlo - Model for Simulating Aerosol Interactions and Chemistry) and extracted aging timescales based on changes in particle cloud condensation nuclei (CCN). We simulated nearly 300 scenarios and, through a regression analysis, identified the key parameters driving the value of the aging timescale. We show that BC's aging timescale spans from hours to weeks, depending on the local environmental conditions and the characteristics of the fresh BC-containing particles. Although the simulations presented in this study included many processes and particle interactions, we show that 80% of the variance in the aging timescale is explained by only a few key parameters. The condensation aging timescale decreased with the flux of condensing aerosol and was shortest for the largest fresh particles, while the coagulation aging timescale decreased with the total number concentration of large (D >100 nm), CCN-active particles and was shortest for the smallest fresh particles. Therefore, both condensation and coagulation play important roles in aging, and their relative impact depends on the particle size range.

  6. Cyclostationary analysis with logarithmic variance stabilisation

    NASA Astrophysics Data System (ADS)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
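
    The square envelope spectrum at the core of this record takes only a few lines to compute. The sketch below shows the standard SES via the Hilbert transform, without the logarithmic variance stabilisation the paper proposes; the test signal is invented:

        import numpy as np
        from scipy.signal import hilbert

        def squared_envelope_spectrum(x, fs):
            """Squared envelope spectrum (SES) of a real signal.

            The analytic signal is obtained via the Hilbert transform; CS2
            components appear as peaks at their cyclic frequencies.
            """
            env2 = np.abs(hilbert(x)) ** 2          # squared envelope
            env2 -= env2.mean()                     # drop the DC component
            ses = np.abs(np.fft.rfft(env2)) / len(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return freqs, ses

        # Toy example: 3 kHz carrier amplitude-modulated at 50 Hz (a CS2 symptom)
        fs = 20_000
        t = np.arange(0, 1.0, 1.0 / fs)
        x = (1 + 0.5 * np.sin(2 * np.pi * 50 * t)) * np.sin(2 * np.pi * 3000 * t)
        x += 0.1 * np.random.default_rng(0).normal(size=len(t))
        freqs, ses = squared_envelope_spectrum(x, fs)
        print(f"peak at {freqs[np.argmax(ses[1:]) + 1]:.1f} Hz")  # ~50 Hz expected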

  7. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report, fourth quarter, 1994, October 1994--December 1994

    SciTech Connect

    1995-09-01

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. The project provides a stepwise evaluation of the following NOx reduction technologies: Advanced overfire air (AOFA), Low NOx burners (LNB), LNB with AOFA, and Advanced Digital Controls and Optimization Strategies. The project has completed the baseline, AOFA, LNB, and LNB+AOFA test segments, fulfilling all testing originally proposed to DOE. Analysis of the LNB long-term data collected shows the full load NOx emission levels to be near 0.65 lb/MBtu. This NOx level represents a 48 percent reduction when compared to the baseline, full load value of 1.24 lb/MBtu. These reductions were sustainable over the long-term test period and were consistent over the entire load range. Full load, fly ash LOI values in the LNB configuration were near 8 percent compared to 5 percent for baseline. Results from the LNB+AOFA phase indicate that full load NOx emissions are approximately 0.40 lb/MBtu with a corresponding fly ash LOI value of near 8 percent. Although this NOx level represents a 67 percent reduction from baseline levels, a substantial portion of the incremental change in NOx emissions between the LNB and LNB+AOFA configurations was the result of operational changes and not the result of the AOFA system. Phase 4 of the project is now underway.

  8. Genomic variance estimates: With or without disequilibrium covariances?

    PubMed

    Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C

    2017-06-01

    Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set on 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance that ignores covariances underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that when added to error variance estimates match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
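
    The two estimates contrasted in this abstract differ only in whether the off-diagonal (LD) terms of the marker covariance matrix enter the quadratic form. A toy sketch with simulated genotypes and marker effects (all values invented):

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 500, 200
        X = rng.binomial(2, 0.3, size=(n, p)).astype(float)         # marker genotypes
        X[:, 1] = np.clip(X[:, 0] + rng.binomial(1, 0.1, n), 0, 2)  # induce LD between loci
        beta = rng.normal(0, 0.1, p)                                # estimated marker effects

        G = np.cov(X, rowvar=False)          # marker covariance matrix

        # "Classical" estimate: ignores disequilibrium covariances between loci
        var_no_ld = np.sum(beta**2 * np.diag(G))

        # Full estimate: beta' G beta includes all pairwise LD covariances
        var_with_ld = beta @ G @ beta

        print(f"ignoring LD: {var_no_ld:.3f}  including LD: {var_with_ld:.3f}")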

  9. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Public design report (preliminary and final)

    SciTech Connect

    1996-07-01

    This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NOx emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners, advanced overfire systems, and digital control system.

  10. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers

    SciTech Connect

    Not Available

    1992-08-24

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  11. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report, Second quarter 1992

    SciTech Connect

    Not Available

    1992-08-24

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  12. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers

    SciTech Connect

    Not Available

    1992-02-03

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an Advanced Overfire Air (AOFA) system followed by Low NOx Burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  13. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report, third quarter 1991

    SciTech Connect

    Not Available

    1992-02-03

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an Advanced Overfire Air (AOFA) system followed by Low NOx Burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  14. Gene set analysis using variance component tests.

    PubMed

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analyses method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and global test in both simulation and a diabetes microarray data.
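
    A rough sketch of the idea: pool per-gene scores through a shrunken working covariance and calibrate by permutation. This is a generic variance-component-style statistic in the spirit of TEGS, not the authors' exact implementation; all data are simulated:

        import numpy as np

        def gene_set_perm_test(Y, x, n_perm=2000, shrink=0.1, seed=0):
            """Variance-component-style score test for a gene set effect.

            Y: (n_samples, n_genes) expression matrix; x: binary group labels.
            The statistic pools per-gene scores through a shrunken working
            covariance; the p-value comes from permuting the labels.
            """
            rng = np.random.default_rng(seed)
            _, g = Y.shape
            W = (1 - shrink) * np.cov(Y, rowvar=False) + shrink * np.eye(g)
            Wi = np.linalg.inv(W)

            def stat(xv):
                s = Y.T @ (xv - xv.mean())       # per-gene score vector
                return s @ Wi @ s

            q_obs = stat(x.astype(float))
            q_null = np.array([stat(rng.permutation(x).astype(float))
                               for _ in range(n_perm)])
            return q_obs, (1 + np.sum(q_null >= q_obs)) / (n_perm + 1)

        # Toy data: 40 samples, 25-gene set, modest shift in the exposed group
        rng = np.random.default_rng(42)
        x = np.repeat([0, 1], 20)
        Y = rng.normal(size=(40, 25)) + 0.4 * x[:, None]
        q, p = gene_set_perm_test(Y, x)
        print(f"Q = {q:.1f}, permutation p = {p:.4f}")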

  15. Prediction error variances for interbreed genetic evaluations.

    PubMed

    Van Vleck, L D; Cundiff, L V

    1994-08-01

    A table for adjusting expected progeny differences (EPD) to a base year and breed basis depends on analyses of records of progeny of bulls of different breeds in a common environment and requires that those reference bulls also have other progeny to provide within-breed EPD. Currently, the germ plasm evaluation project at the Meat Animal Research Center (MARC) provides such a common environment for reference bulls of several breeds for estimation of breed differences for the reference sires. Reference sire estimates of breed differences are adjusted by the difference between average EPD of reference bulls and average EPD for the base year for that breed. Two related questions are as follows: 1) What are confidence ranges for the adjustments and 2) What are accuracies of interbreed EPD? Application of statistical principles and algebra shows that 1) apparent confidence ranges for breed adjustments are small, 2) apparent confidence ranges are substantially underestimated when random sire effects within breed are ignored, 3) correct confidence ranges also are small, 4) usual measures of accuracy cannot be applied to interbreed comparisons, and 5) standard errors of prediction used in calculating confidence ranges for interbreed comparisons are much less affected by variance of the adjustment factors than by within-breed accuracies for two bulls being compared except for bulls with accuracies of near unity. Alternatives of predicting differences between bulls of the same or different breeds or between a bull of any breed and an average bull of a base breed are discussed in terms of confidence ranges.(ABSTRACT TRUNCATED AT 250 WORDS)

  16. Gene set analysis using variance component tests

    PubMed Central

    2013-01-01

    Background Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. Results We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion We develop a gene set analyses method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and global test in both simulation and a diabetes microarray data. PMID:23806107

  17. Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera

    NASA Astrophysics Data System (ADS)

    Marchitto, T. M.; Grist, H. R.; van Geen, A.

    2013-12-01

    Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians; while a reduction in seasonality would cause the two species' distributions to move closer together.

  18. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. ?? 2008, The International Biometric Society.
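
    For concreteness, the usual design-based estimator that treats the K lines as a random sample of lines can be written in a few lines; the transect counts and lengths below are invented:

        import numpy as np

        def encounter_rate_var(n_k, l_k):
            """Design-based variance of the encounter rate n/L in line
            transect sampling, treating the K lines as a random sample
            (the form often labelled R2 in the distance-sampling literature).
            n_k: detections per line; l_k: line lengths."""
            n_k, l_k = np.asarray(n_k, float), np.asarray(l_k, float)
            K, L, n = len(n_k), l_k.sum(), n_k.sum()
            er = n / L
            return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - er) ** 2)

        # Example: 8 transect lines
        n_k = [4, 7, 2, 9, 5, 3, 6, 4]
        l_k = [2.0, 2.5, 1.5, 3.0, 2.0, 1.0, 2.5, 2.0]  # km
        print(f"var(n/L) = {encounter_rate_var(n_k, l_k):.4f}")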

  19. Escape from predators and genetic variance in birds.

    PubMed

    Jiang, Y; Møller, A P

    2017-09-12

    Predation is a common cause of death in numerous organisms, and a host of antipredator defences have evolved. Such defences often have a genetic background as shown by significant heritability and microevolutionary responses towards weaker defences in the absence of predators. Flight initiation distance (FID) is the distance at which an individual animal takes flight when approached by a human, and hence, it reflects the life-history compromise between risk of predation and the benefits of foraging. Here, we analysed FID in 128 species of birds in relation to three measures of genetic variation, band sharing coefficient for minisatellites, observed heterozygosity and inbreeding coefficient for microsatellites in order to test whether FID was positively correlated with genetic variation. We found consistently shorter FID for a given body size in the presence of high band sharing coefficients, low heterozygosity and high inbreeding coefficients in phylogenetic analyses after controlling statistically for potentially confounding variables. These findings imply that antipredator behaviour is related to genetic variance. We predict that many threatened species with low genetic variability will show reduced antipredator behaviour and that subsequent predator-induced reductions in abundance may contribute to unfavourable population trends for such species. © 2017 European Society For Evolutionary Biology.

  20. Prediction of membrane protein types using maximum variance projection

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Yang, Jie

    2011-05-01

    Predicting membrane protein types has a positive influence on further biological function analysis. Quickly and efficiently annotating the type of an uncharacterized membrane protein is a challenge. In this work, a system based on maximum variance projection (MVP) is proposed to improve the prediction performance of membrane protein types. The feature extraction step is based on a hybridization representation approach fusing Position-Specific Score Matrix composition. The protein sequences are quantized in a high-dimensional space using this representation strategy. Analysing these high-dimensional feature vectors raises problems such as long computing times and high classifier complexity. To address this issue, MVP, a novel dimensionality reduction algorithm, is introduced to extract the essential features from the high-dimensional feature space. Then, a K-nearest neighbour classifier is employed to identify the types of membrane proteins based on their reduced low-dimensional features. As a result, the jackknife and independent dataset test success rates of this model reach 86.1 and 88.4%, respectively, suggesting that the proposed approach is very promising for predicting membrane protein types.
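
    The reduce-then-classify pipeline is straightforward to prototype. In the sketch below, PCA stands in for the paper's MVP, and random features stand in for the PSSM-derived descriptors, so the printed accuracy is only a smoke test:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 500))      # placeholder high-dimensional protein features
        y = rng.integers(0, 5, size=400)     # placeholder membrane protein types (5 classes)

        # Dimensionality reduction followed by a K-nearest neighbour classifier
        model = make_pipeline(PCA(n_components=30), KNeighborsClassifier(n_neighbors=5))
        scores = cross_val_score(model, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f}")  # near chance on random data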

  1. A fast minimum variance beamforming method using principal component analysis.

    PubMed

    Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo

    2014-06-01

    Minimum variance (MV) beamforming has been studied for improving the performance of a diagnostic ultrasound imaging system. However, it is not easy for the MV beamforming to be implemented in a real-time ultrasound imaging system because of the enormous amount of computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates the MV beamforming while reducing the computational complexity greatly through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the transformed domain of beamformer input signal by the PCA, where the dimension of the transformed covariance matrix is identical to the number of some selected principal component vectors. Both computer simulation and experiment were carried out to verify the effectiveness of the proposed method with echo signals from simulation as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
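
    A compact sketch of the offline/online split described above: conventional MV (Capon) weights are collected offline, their dominant principal directions are extracted, and new snapshots are beamformed by solving the MV problem in the reduced space. All array parameters and data are invented:

        import numpy as np

        def mv_weights(R, a, eps=1e-3):
            """Classic minimum variance (Capon) weights: w = R^-1 a / (a^H R^-1 a)."""
            Ri = np.linalg.inv(R + eps * np.trace(R) / len(R) * np.eye(len(R)))
            w = Ri @ a
            return w / (a.conj() @ w)

        rng = np.random.default_rng(0)
        m = 16                                     # array elements
        a = np.ones(m) / np.sqrt(m)                # steering vector (broadside)

        # Offline: conventional MV weights over many training snapshots
        W_train = np.column_stack([
            mv_weights(np.cov(rng.normal(size=(m, 200))), a) for _ in range(300)
        ])
        U, _, _ = np.linalg.svd(W_train, full_matrices=False)
        basis = U[:, :2]                           # two dominant principal components

        # Online: solve the (2 x 2) MV problem in the reduced space instead of m x m
        R_new = np.cov(rng.normal(size=(m, 200)))
        R_red = basis.T @ R_new @ basis
        a_red = basis.T @ a
        w_red = np.linalg.solve(R_red, a_red)
        w_fast = basis @ (w_red / (a_red @ w_red))
        print("approx vs full MV weight error:",
              np.linalg.norm(w_fast - mv_weights(R_new, a)))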

  2. If supply-oriented drug policy is broken, can harm reduction help fix it? Melding disciplines and methods to advance international drug-control policy.

    PubMed

    Greenfield, Victoria A; Paoli, Letizia

    2012-01-01

    Critics of the international drug-control regime contend that supply-oriented policy interventions are not just ineffective, but, in focusing almost exclusively on supply reduction, they also produce unintended adverse consequences. Evidence from the world heroin market supports their claims. The balance of the effects of policy is yet unknown, but the prospect of adverse consequences underlies a central paradox of contemporary supply-oriented policy. In this paper, we evaluate whether harm reduction, a subject of intense debate in the demand-oriented drug-policy community, can provide a unifying foundation for supply-oriented drug policy and speak more directly to policy goals. Our analysis rests on an extensive review of the literature on harm reduction and draws insight from other policy communities' disciplines and methods. First, we explore the paradoxes of supply-oriented policy that initially motivated our interest in harm reduction; second, we consider the conceptual and technical challenges that have contributed to the debate on harm reduction and assess their relevance to a supply-oriented application; third, we examine responses to those challenges, i.e., various tools (taxonomies, models, and measurement strategies), that can be used to identify, categorize, and assess harms. Despite substantial conceptual and technical challenges, we find that harm reduction can provide a basis for assessing the net consequences of supply-oriented drug policy, choosing more rigorously amongst policy options, and identifying new options. In addition, we outline a practical path forward for assessing harms and policy options. On the basis of our analysis, we suggest pursuing a harm-based approach and making a clearer distinction between supply-oriented and supply-reduction policy. Published by Elsevier B.V.

  3. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
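
    The one-period building block, maximizing a weighted sum of the empirical mean and (negative) variance computed from clones, looks as follows; the dynamic-programming recursion over periods and the final normalization are simplifications, and all numbers are invented:

        import numpy as np

        rng = np.random.default_rng(8)
        R = rng.normal(0.05, 0.1, size=(10_000, 4))    # returns across market clones, 4 assets
        mu = R.mean(axis=0)                            # empirical mean
        Sigma = np.cov(R, rowvar=False)                # empirical covariance
        gamma = 4.0                                    # weight on the variance term

        # Maximize w'mu - gamma * w'Sigma w  =>  w = (2 gamma Sigma)^-1 mu
        w = np.linalg.solve(2 * gamma * Sigma, mu)
        w = w / w.sum()                                # crude full-investment normalization
        print("weights:", np.round(w, 3))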

  4. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  5. Hartley-ross type variance estimators in simple random sampling

    NASA Astrophysics Data System (ADS)

    Kadilar, Cem; Cekim, Hatice Oncel

    2017-07-01

    In this article, we have improved some unbiased ratio type estimators for estimating the finite population variance of the study variable. The proposed estimators are developed with the aid of the variance estimators given by Upadhyaya and Singh [8] and Kadilar and Cingi [1] in simple random sampling. We have derived the expressions of the variance of the proposed estimators. We have obtained the comparison conditions for which the variance values of the proposed estimators are smaller than MSE values of estimators given in Upadhyaya and Singh [8] and Kadilar and Cingi [1]. The real data set taken from the literature is used for the numerical comparisons.
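
    The classical ratio-type variance estimator that such improved estimators build on uses a known population variance of an auxiliary variable; a sketch under simple random sampling with simulated data:

        import numpy as np

        def ratio_variance_estimator(y_sample, x_sample, Sx2):
            """Classical ratio-type estimator of the population variance of y
            under SRS, given the known population variance Sx2 of an auxiliary
            variable x:  S^2_y_hat = s_y^2 * (Sx2 / s_x^2)."""
            sy2 = np.var(y_sample, ddof=1)
            sx2 = np.var(x_sample, ddof=1)
            return sy2 * Sx2 / sx2

        rng = np.random.default_rng(3)
        x_pop = rng.gamma(4.0, 2.0, size=5000)           # auxiliary variable
        y_pop = 1.5 * x_pop + rng.normal(0, 2, 5000)     # correlated study variable
        idx = rng.choice(5000, size=100, replace=False)  # simple random sample
        est = ratio_variance_estimator(y_pop[idx], x_pop[idx], np.var(x_pop, ddof=1))
        print(f"estimate {est:.2f} vs true {np.var(y_pop, ddof=1):.2f}")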

  6. Hartley-Ross type variance estimators in simple random sampling

    NASA Astrophysics Data System (ADS)

    Kadilar, Cem; Cekim, Hatice Oncel

    2017-07-01

    In this article, we have improved some unbiased ratio type estimators for estimating the finite population variance of the study variable. The proposed estimators are developed with the aid of the variance estimators given by Upadhyaya and Singh [8] and Kadilar and Cingi [4] in simple random sampling. We have derived the expressions of the variance of the proposed estimators. We have obtained the comparison conditions for which the variance values of the proposed estimators are smaller than MSE values of estimators given in Upadhyaya and Singh [8] and Kadilar and Cingi [4]. The real data set taken from the literature is used for the numerical comparisons.

  7. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    PubMed

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  8. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    NASA Astrophysics Data System (ADS)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using such variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
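
    A minimal sketch of such a detector: compute RR intervals from R-peak times and flag windows whose RR variance exceeds a threshold. The window length and threshold below are illustrative, not clinically tuned:

        import numpy as np

        def rr_variance_flags(r_peak_times, win=30, threshold=0.02):
            """Flag possible atrial fibrillation from R-peak times (s) using a
            sliding-window variance of the RR intervals."""
            rr = np.diff(r_peak_times)                    # RR intervals, s
            return np.array([np.var(rr[i:i + win]) > threshold
                             for i in range(len(rr) - win + 1)])

        # Toy signal: regular rhythm followed by irregular (AF-like) intervals
        rng = np.random.default_rng(7)
        regular = rng.normal(0.80, 0.01, 120)              # low RR variance
        irregular = rng.normal(0.70, 0.20, 120).clip(0.3)  # high RR variance
        r_times = np.cumsum(np.concatenate([regular, irregular]))
        flags = rr_variance_flags(r_times)
        print(f"{flags.mean():.0%} of windows flagged as AF-like")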

  9. High-fidelity Simulation of Jet Noise from Rectangular Nozzles . [Large Eddy Simulation (LES) Model for Noise Reduction in Advanced Jet Engines and Automobiles

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj

    2014-01-01

    This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.

  10. Space Launch System (SLS) Program Overview NASA Research Announcement (NRA) Advanced Booster (AB) Engineering Demonstration and Risk Reduction (EDRR) Industry Day

    NASA Technical Reports Server (NTRS)

    May, Todd A.

    2011-01-01

    SLS is a national capability that empowers entirely new exploration for missions of national importance. Program key tenets are safety, affordability, and sustainability. SLS builds on a solid foundation of experience and current capacities to enable a timely initial capability and evolve to a flexible heavy-lift capability through competitive opportunities: (1) Reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS (2) Enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability and performance The road ahead promises to be an exciting journey for present and future generations, and we look forward to working with you to continue America fs space exploration.

  11. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  12. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    PubMed

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
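
    Both ASCA and the proposed ANOVA-TP start from the same ANOVA decomposition of the data matrix into effect matrices. A one-factor sketch of that first step (the subsequent PLS/target-projection step is omitted, and the data are simulated):

        import numpy as np

        def anova_effect_matrices(X, factor):
            """Decompose a (samples x variables) matrix into the grand mean,
            a one-factor effect matrix, and residuals -- the ANOVA
            decomposition that ASCA and ANOVA-TP both build on."""
            grand = X.mean(axis=0, keepdims=True)
            effect = np.zeros_like(X)
            for lvl in np.unique(factor):
                rows = factor == lvl
                effect[rows] = X[rows].mean(axis=0) - grand
            residual = X - grand - effect
            return grand, effect, residual

        rng = np.random.default_rng(5)
        factor = np.repeat([0, 1, 2], 10)                  # e.g. pasteurization level
        X = rng.normal(size=(30, 40)) + 0.5 * factor[:, None]
        grand, effect, resid = anova_effect_matrices(X, factor)
        ss = lambda M: np.sum(M**2)
        print(f"effect SS fraction: {ss(effect) / (ss(effect) + ss(resid)):.2f}")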

  13. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    NASA Astrophysics Data System (ADS)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in the data caused by the imaging system must be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set, and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources, based on knowledge of the calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading…

  14. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  15. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment…
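
    The variance-based ranking described under Methods can be prototyped with a brute-force estimate of the first-order indices S_i = Var(E[Y | X_i]) / Var(Y); the toy model below merely stands in for the RBE/EQD2 calculation:

        import numpy as np

        def first_order_sensitivity(X, y, bins=20):
            """Brute-force variance-based first-order sensitivity indices,
            S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning each input
            on its quantiles. A sketch of the SA idea, not the authors' tool."""
            var_y = np.var(y)
            S = []
            for j in range(X.shape[1]):
                edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1))
                idx = np.clip(np.searchsorted(edges, X[:, j]) - 1, 0, bins - 1)
                cond_means = np.array([y[idx == b].mean() for b in range(bins)])
                counts = np.bincount(idx, minlength=bins)
                S.append(np.average((cond_means - y.mean())**2, weights=counts) / var_y)
            return np.array(S)

        # Toy stand-in for RBE(alpha, beta, dose): y depends strongly on x0 only
        rng = np.random.default_rng(11)
        X = rng.uniform(size=(100_000, 3))
        y = 4 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=len(X))
        print(first_order_sensitivity(X, y).round(2))   # x0 should dominate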

  16. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... different factors” from those on which an effluent limitations guideline was based; (2) A variance based upon certain water quality factors under CWA section 301(g). (f) The Administrator (or his delegate... CWA section 301(c); or (2) A variance based on water quality related effluent limitations under CWA...

  17. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... different factors” from those on which an effluent limitations guideline was based; (2) A variance based upon certain water quality factors under CWA section 301(g). (f) The Administrator (or his delegate... CWA section 301(c); or (2) A variance based on water quality related effluent limitations under CWA...

  18. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... different factors” from those on which an effluent limitations guideline was based; (2) A variance based upon certain water quality factors under CWA section 301(g). (f) The Administrator (or his delegate... CWA section 301(c); or (2) A variance based on water quality related effluent limitations under CWA...

  19. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... different factors” from those on which an effluent limitations guideline was based; (2) A variance based upon certain water quality factors under CWA section 301(g). (f) The Administrator (or his delegate... CWA section 301(c); or (2) A variance based on water quality related effluent limitations under CWA...

  20. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... different factors” from those on which an effluent limitations guideline was based; (2) A variance based upon certain water quality factors under CWA section 301(g). (f) The Administrator (or his delegate... CWA section 301(c); or (2) A variance based on water quality related effluent limitations under CWA...

  1. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of pleadings. 8.59 Section 8.59 Money and Finance: Treasury Office of the Secretary of the Treasury PRACTICE BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance...

  2. Gender Variance and Educational Psychology: Implications for Practice

    ERIC Educational Resources Information Center

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  3. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
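
    The tradeoff is easy to exhibit numerically: averaged over many training sets, simple models show high bias and low variance, flexible models the reverse. A self-contained sketch with an invented target function:

        import numpy as np

        rng = np.random.default_rng(9)
        true_f = lambda x: np.sin(2 * np.pi * x)
        x_test = 0.35                                # evaluate the tradeoff at one point

        for degree in (1, 3, 9):
            preds = []
            for _ in range(500):                     # many independent training sets
                x = rng.uniform(0, 1, 30)
                y = true_f(x) + rng.normal(0, 0.3, 30)
                coef = np.polyfit(x, y, degree)
                preds.append(np.polyval(coef, x_test))
            preds = np.array(preds)
            bias2 = (preds.mean() - true_f(x_test)) ** 2
            print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {preds.var():.4f}")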

  4. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity

    PubMed Central

    Diaz, S Anaid; Viney, Mark

    2014-01-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently-wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species. PMID:25360248

  5. On the Endogeneity of the Mean-Variance Efficient Frontier.

    ERIC Educational Resources Information Center

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  6. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  7. The depletion of genetic variance by sexual selection.

    PubMed

    Van Homrigh, Anna; Higgie, Megan; McGuigan, Katrina; Blows, Mark W

    2007-03-20

    Sexually selected traits display substantial genetic variance [1, 2], in conflict with the expectation that sexual selection will deplete it [3-5]. Condition dependence is thought to resolve this paradox [5-7], but experimental tests that relate the direction of sexual selection to the availability of genetic variance are lacking. Here, we show that condition-dependent expression is not sufficient to maintain genetic variance available to sexual selection in multiple male sexually selected traits. We employed an experimental design that simultaneously determined the quantitative genetic basis of nine male cuticular hydrocarbons (CHCs) of Drosophila bunnanda, the extent of condition dependence of these traits, and the strength and direction of sexual selection acting upon them. The CHCs of D. bunnanda are condition dependent, with 18% of the genetic variance in male body size explained by genetic variance in CHCs. Despite the presence of genetic variance in individual male traits, 98% of the genetic variance in CHCs was found to be orientated more than 88 degrees away from the direction of sexual selection and therefore unavailable to selection. A lack of genetic variance in male traits in the direction of sexual selection may represent a general feature of sexually selected systems, even in the presence of condition-dependent trait expression.
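
    The geometric statement in this abstract (genetic variance oriented away from the direction of selection) corresponds to the angle between the selection gradient and the dominant subspace of the genetic covariance (G) matrix. A sketch with invented nine-trait numbers:

        import numpy as np

        def angle_from_g_subspace(G, beta, k=1):
            """Angle (degrees) between the selection gradient beta and the
            dominant k-dimensional subspace of the genetic covariance matrix G.
            Angles near 90 degrees mean little genetic variance is available
            in the direction of selection."""
            evals, evecs = np.linalg.eigh(G)
            B = evecs[:, -k:]                       # top-k genetic dimensions
            proj = B @ (B.T @ beta)                 # projection onto the subspace
            cos = np.linalg.norm(proj) / np.linalg.norm(beta)
            return np.degrees(np.arccos(np.clip(cos, 0, 1)))

        # Toy 9-trait example (dimensions mimic the nine CHCs; numbers invented)
        rng = np.random.default_rng(2)
        A = rng.normal(size=(9, 9))
        G = A @ A.T                                 # a valid covariance matrix
        beta = rng.normal(size=9)                   # selection gradient
        print(f"{angle_from_g_subspace(G, beta, k=2):.1f} degrees")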

  8. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    PubMed

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
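
    The paper's exact sensitivity formulas are not reproduced here, but the quantities being differentiated are easy to approximate numerically. A minimal sketch, assuming i.i.d. random environments, hypothetical 2x2 projection matrices (not the polar bear model), and finite differences with common random numbers:

      import numpy as np

      def growth_stats(mats, probs, T=100_000, seed=0):
          """Simulate one-step log growth rates under i.i.d. environments;
          their mean and variance serve as a simple stand-in for the mean
          and variance of stochastic population growth."""
          rng = np.random.default_rng(seed)
          n = np.ones(mats[0].shape[0])
          r = np.empty(T)
          for t, i in enumerate(rng.choice(len(mats), size=T, p=probs)):
              n = mats[i] @ n
              s = n.sum()
              r[t] = np.log(s)
              n /= s                      # renormalise to avoid overflow
          return r.mean(), r.var()

      A_good = np.array([[0.5, 1.8], [0.6, 0.8]])   # good years
      A_bad  = np.array([[0.3, 0.9], [0.4, 0.7]])   # bad years
      mu0, v0 = growth_stats([A_good, A_bad], [0.5, 0.5])

      # finite-difference sensitivity of both moments to one vital rate,
      # re-using the same seed (common random numbers) to damp MC noise
      eps = 1e-4
      A_pert = A_good.copy()
      A_pert[0, 1] += eps
      mu1, v1 = growth_stats([A_pert, A_bad], [0.5, 0.5])
      print("d mean / d theta ~", (mu1 - mu0) / eps)
      print("d var  / d theta ~", (v1 - v0) / eps)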

  9. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  10. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 5 2011-07-01 2011-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS...

  11. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS...

  12. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and...

  13. Gender Variance and Educational Psychology: Implications for Practice

    ERIC Educational Resources Information Center

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  14. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a...

  15. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a...

  16. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a...

  17. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Because the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
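
    The note's technique is truncated above, but one standard differentiation route that avoids integration by parts goes through the moment generating function: the mean is M'(0) and the variance is M''(0) - M'(0)^2. A sketch for the exponential distribution (this illustrates the general idea and is not necessarily the authors' exact derivation):

      import sympy as sp

      t, lam = sp.symbols('t lam', positive=True)
      M = lam / (lam - t)                    # MGF of Exp(lam), for t < lam

      mean = sp.diff(M, t).subs(t, 0)        # M'(0)  -> 1/lam
      second = sp.diff(M, t, 2).subs(t, 0)   # M''(0) -> 2/lam**2
      var = sp.simplify(second - mean**2)    # -> 1/lam**2
      print(mean, var)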

  18. Productive Failure in Learning the Concept of Variance

    ERIC Educational Resources Information Center

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  19. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  20. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  1. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  2. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  3. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  4. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  5. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the Williams... the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from a standard... the Williams-Steiger Occupational Safety and Health Act of 1970. In accordance with the requirements...

  6. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...'s satisfaction, that the remote facility continues to meet the requirements of §§ 456.521 through...

  7. Variances and Covariances of Kendall's Tau and Their Estimation.

    ERIC Educational Resources Information Center

    Cliff, Norman; Charlin, Ventura

    1991-01-01

    Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
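
    Tie-corrected variance formulas in this spirit are what let modern software attach a p-value to the sample tau. A minimal illustration using scipy (an analogous implementation, not the authors' own code):

      import numpy as np
      from scipy.stats import kendalltau

      rng = np.random.default_rng(1)
      x = rng.integers(0, 5, size=100)    # heavily tied ordinal data
      y = x + rng.integers(0, 3, size=100)

      tau, p = kendalltau(x, y)           # tau-b handles ties; the p-value
      print(tau, p)                       # rests on a tie-adjusted variance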

  8. Characterizing the evolution of genetic variance using genetic covariance tensors.

    PubMed

    Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W

    2009-06-12

    Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.

  9. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
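
    The drift-immunity argument is easy to verify numerically: the Hadamard variance is built from second differences, and the second difference of a linear frequency drift is exactly zero. A minimal sketch at the basic sampling interval, using simplified non-overlapped estimators on synthetic white FM noise:

      import numpy as np

      def allan_var(y):
          """Two-sample (Allan) variance of fractional frequency data."""
          d = np.diff(y)
          return 0.5 * np.mean(d**2)

      def hadamard_var(y):
          """Three-sample (Hadamard) variance: second differences cancel
          any linear frequency drift."""
          d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
          return np.mean(d2**2) / 6.0

      rng = np.random.default_rng(0)
      y = rng.normal(0.0, 1e-12, size=100_000)         # white FM noise
      drift = 2e-12 * np.arange(y.size)                # linear drift

      print(allan_var(y), allan_var(y + drift))        # inflated by drift
      print(hadamard_var(y), hadamard_var(y + drift))  # nearly unchanged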

  10. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... compliance plan proposed by the applicant can reasonably be implemented and will achieve compliance as...

  11. Innovative Clean Coal Technology (ICCT): 180 MW demonstration of advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, third quarter 1991

    SciTech Connect

    Not Available

    1992-02-03

    This quarterly report discusses the technical progress of a US Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) Project demonstrating advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from a coal-fired boiler. The project is being conducted at Gulf Power Company's Plant Lansing Smith Unit 2 located near Panama City, Florida. The primary objective of this demonstration is to determine the long-term effects of commercially available tangentially-fired low NO{sub x} combustion technologies on NO{sub x} emissions and boiler performance. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project.

  12. Innovative Clean Coal Technology (ICCT): 180 MW demonstration of advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO sub x ) emissions from coal-fired boilers

    SciTech Connect

    Not Available

    1992-02-03

    This quarterly report discusses the technical progress of a US Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) Project demonstrating advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from a coal-fired boiler. The project is being conducted at Gulf Power Company's Plant Lansing Smith Unit 2 located near Panama City, Florida. The primary objective of this demonstration is to determine the long-term effects of commercially available tangentially-fired low NO{sub x} combustion technologies on NO{sub x} emissions and boiler performance. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project.

  13. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, second quarter 1995

    SciTech Connect

    1995-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. The project provides a stepwise evaluation of the following NO{sub x} reduction technologies: advanced overfire air (AOFA), low NO{sub x} burners (LNB), LNB with AOFA, and advanced digital controls and optimization strategies. The project has completed the baseline, AOFA, LNB, and LNB + AOFA test segments, fulfilling all testing originally proposed to DOE. Phase 4 of the project, demonstration of advanced control/optimization methodologies for NO{sub x} abatement, is now in progress. The methodology selected for demonstration at Hammond Unit 4 is the Generic NO{sub x} Control Intelligent System (GNOCIS), which is being developed by a consortium consisting of the Electric Power Research Institute, PowerGen, Southern Company, Radian Corporation, U.K. Department of Trade and Industry, and US DOE. GNOCIS is a methodology that can result in improved boiler efficiency and reduced NO{sub x} emissions from fossil fuel fired boilers. Using a numerical model of the combustion process, GNOCIS applies an optimizing procedure to identify the best set points for the plant on a continuous basis. GNOCIS is designed to operate in either advisory or supervisory modes. Prototype testing of GNOCIS is in progress at Alabama Power's Gaston Unit 4 and PowerGen's Kingsnorth Unit 1.

  14. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, Fourth quarter 1992

    SciTech Connect

    Not Available

    1992-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full load NO{sub x} emission levels to be approximately 0.65 lb/MBtu. Flyash LOI values for the LNB configuration are approximately 8 percent at full load. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. Abbreviated diagnostic tests for the LNB+AOFA configuration indicate that at 500 MWe, NO{sub x} emissions are approximately 0.55 lb/MBtu with corresponding flyash LOI values of approximately 11 percent. For comparison, the long-term, full load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB+AOFA configuration will be performed when the stack particulate emissions issue is resolved.

  15. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  16. How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?

    NASA Astrophysics Data System (ADS)

    Gebregiorgis, A. S.; Hossain, F.

    2014-12-01

    The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and seasons are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variances of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
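
    As a sketch of the regression step, the snippet below fits an assumed power-law relation between satellite rain rate and error variance within one topography/climate/season stratum; the functional form and the numbers are illustrative placeholders, not the paper's fitted model:

      import numpy as np
      from scipy.optimize import curve_fit

      # hypothetical calibration sample for one stratum: satellite rain
      # rate (mm/h) and squared error against a gauge reference
      sat_rate = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
      err_var = np.array([0.1, 0.3, 0.9, 2.5, 7.0, 20.0, 55.0])

      def model(r, a, b):
          return a * r**b          # nonlinear regression in rain rate

      (a, b), _ = curve_fit(model, sat_rate, err_var, p0=(0.3, 1.3))
      print(a, b, model(10.0, a, b))   # predicted error variance at 10 mm/h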

  17. Preliminary Evaluation of Preoperative Chemohormonotherapy-Induced Reduction of the Functional Infrared Imaging Score in Patients with Locally Advanced Breast Cancer

    DTIC Science & Technology

    2001-10-25

    preferably delivered within a clinical trial, is the current favored treatment strategy. PCT offers a number of advantages, including ensuring... improved drug access to the primary tumor site by avoiding surgical scarring, the possibility of complete or partial tumor reduction that could downsize the... mastectomy. In addition, there is sufficient data to suggest that the absence of any residual tumor cells in the surgical pathology specimen following PCT

  18. More to Astronomical Images than Meets the Eye: Data Dimension Reduction for Efficient Data Organization, Retrieval and Advanced Visualization and Analysis of Large Multitemporal/Multispectral Data Sets

    NASA Astrophysics Data System (ADS)

    Pesenson, Meyer; Pesenson, I.; Carey, S.; Roby, W.; McCollum, B.; Ingalls, J.; Ardila, D.; Teplitz, H.

    2009-01-01

    Effective analysis of large multispectral and multitemporal data sets demands new ways of data representation. We present applications of standard and original methods of data dimension reduction to astrophysical images (finding significant low-dimensional structures concealed in high-dimensional data). Such methods are widely used already outside of astronomy to effectively analyze large data sets. Data dimension reduction facilitates data organization, retrieval, and analysis (by improving statistical inference), which are crucial to multiwavelength astronomy, archival research, large-scale digital sky surveys and temporal astronomy. These methods allow a user to reduce a large number of FITS images, e.g. each representing a different wavelength, into a few images retaining more than 95% of the original visual information. An immediate simple application of this would be creating a multiwavelength "quick-look" image that includes all essential information in a statistically justified way, and thus is much more accurate than a "quick-look" made by simple coadding with an ad hoc, heuristic weighting. The dimensionally-reduced image is also naturally much smaller in file size in bytes than the total summed size of the non-dimensionally-reduced images. Thus dimensionally-reduced images offer an enormous savings in storage space and database-transmission bandwidth for the user. An analogous process of dimension reduction is possible for a large set of images obtained at the same wavelength but at different times (e.g. LSST images). Other applications of data dimension reduction include, but are not limited to, decorrelating data elements, removing noise, artifact separation, feature extraction, clustering and pattern classification in astronomical images. We demonstrate applications of the algorithms to test cases of current space-based IR data from the Spitzer Space Telescope.
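
    A common concrete instance of such dimension reduction is principal component analysis over a stack of co-registered images, keeping just enough components to retain 95% of the variance, as the abstract describes. A minimal sketch with placeholder data (the authors' actual pipeline and algorithms may differ):

      import numpy as np
      from sklearn.decomposition import PCA

      # hypothetical stack of 8 co-registered bands of one field:
      # shared structure plus per-band noise (placeholder data)
      rng = np.random.default_rng(0)
      n_bands, ny, nx = 8, 128, 128
      base = rng.normal(size=(ny, nx))
      cube = np.array([(i + 1) * base + 0.1 * rng.normal(size=(ny, nx))
                       for i in range(n_bands)])

      X = cube.reshape(n_bands, -1).T   # pixels as samples, bands as features
      pca = PCA(n_components=0.95)      # keep >= 95% of the variance
      scores = pca.fit_transform(X)

      eigen_images = scores.T.reshape(-1, ny, nx)   # the reduced image set
      print(pca.n_components_, pca.explained_variance_ratio_.sum())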

  19. Spatio-temporal distribution of variance in upper ocean salinity

    NASA Astrophysics Data System (ADS)

    Maes, C.; O'Kane, T.; Monselesan, D. P.

    2016-12-01

    Despite recent advances in satellite sensors, great uncertainty remains in the large-scale spatial variations of upper ocean salinity across seasonal through interannual to decadal timescales. Consonant with both broad-scale surface warming and the amplification of the global hydrological cycle, studies of observed global multidecadal salinity changes have typically focused on the linear response to anthropogenic forcing, not on salinity variations due to changes in static stability or variability due to intrinsic ocean or internal climate processes. Here, we examine the static stability and spatiotemporal variability of upper ocean salinity across a hierarchy of models and reanalyses. In particular, we partition the variance into time bands via application of singular spectral analysis, considering sea surface salinity (SSS), the Brunt-Vaisala frequency (N2), and ocean salinity stratification (OSS) in terms of the stabilizing effect due to the haline part of N2 over the upper 500 m. We identify regions of significant coherent SSS variability, either intrinsic to the ocean or in response to the interannually varying atmosphere. In the tropics, the OSS contributes 40-50% to N2 as compared to the thermal part and exceeds it for a few months of the seasonal cycle. Away from the tropics, near the centers of action of the subtropical gyres, there are regions characterized by the permanent absence of OSS. Based on consistency across models (CMIP5 and forced experiments) and reanalyses, we identify the stabilizing role of salinity, and the role of salinity in destabilizing upper ocean stratification in the subtropical regions where large-scale density compensation typically occurs. Finally, we discuss a survey of the small scales of observed variability of the SSS field, with a typical range of 10 to 100 km, which has implications for the validation of satellite-based measurements characterized by a spatial footprint of 50-150 km.

  20. Pilot-scale test of an advanced, integrated wastewater treatment process with sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal (SIPER).

    PubMed

    Yan, Peng; Ji, Fangying; Wang, Jing; Fan, Jianping; Guan, Wei; Chen, Qingkong

    2013-08-01

    Sludge reduction technologies are increasingly important in wastewater treatment, but have some defects. In order to remedy them, a novel, integrated process including sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal was developed. The pilot-scale system was operated steadily at a treatment scale of 10 m(3)/d for 90 days. The results showed excellent nutrient removal, with average removal efficiencies for NH4(+)-N, TN, TP, and COD reaching 98.2 ± 1.34%, 75.5 ± 3.46%, 95.3 ± 1.65%, and 92.7 ± 2.49%, respectively. The ratio of mixed liquor volatile suspended solids (MLVSS) to mixed liquor suspended solids (MLSS) in the system gradually increased, from 0.33 to 0.52. The process effectively prevented the accumulation of inert or inorganic solids in activated sludge. Phosphorus was recovered as a crystalline product with aluminum ion from wastewater. The observed sludge yield Yobs of the system was 0.103 gVSS/g COD, demonstrating that the system's sludge reduction potential is excellent.

  1. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  2. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  3. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  4. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  5. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  6. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

  8. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
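
    The correction statistic is simple to compute from any relationship matrix: Dk is the mean self-relationship (the diagonal) minus the mean over all entries, and the comparable genetic variance is the estimated variance component times Dk. A sketch with a small hypothetical matrix:

      import numpy as np

      def dk_statistic(K):
          """Dk = average self-relationship minus average (self- and
          across-) relationship, as defined in the abstract above."""
          return K.diagonal().mean() - K.mean()

      # toy relationship matrix for 4 individuals (hypothetical values)
      K = np.array([[1.00, 0.10, 0.05, 0.02],
                    [0.10, 0.98, 0.12, 0.04],
                    [0.05, 0.12, 1.03, 0.08],
                    [0.02, 0.04, 0.08, 0.99]])

      sigma2_u = 2.5                      # estimated variance component
      print(dk_statistic(K))              # close to 1 for typical models
      print(sigma2_u * dk_statistic(K))   # variance referred to the
                                          # common reference population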

  9. Filtered kriging for spatial data with heterogeneous measurement error variances.

    PubMed

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest. © 2011, The International Biometric Society.
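
    One plausible reading of the bias adjustment: with known site-specific error variances, E[(z_i - z_j)^2] = 2*gamma(h) + sigma2_i + sigma2_j, so each squared increment is corrected by the two sites' error variances before binning. A sketch under that assumption (not the paper's exact estimator):

      import numpy as np

      def adjusted_semivariogram(coords, z, sigma2, bin_edges):
          """Empirical semivariogram of the error-free process: correct
          each squared increment for the two sites' known measurement
          error variances, then average within distance bins."""
          d, g = [], []
          for i in range(len(z)):
              for j in range(i + 1, len(z)):
                  h = np.linalg.norm(coords[i] - coords[j])
                  g.append(0.5 * ((z[i] - z[j])**2 - sigma2[i] - sigma2[j]))
                  d.append(h)
          d, g = np.asarray(d), np.asarray(g)
          idx = np.digitize(d, bin_edges)
          return np.array([g[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(1, len(bin_edges))])

      rng = np.random.default_rng(0)
      coords = rng.uniform(0, 10, size=(50, 2))
      sigma2 = rng.uniform(0.05, 0.5, size=50)   # known, site-specific
      z = np.sin(coords[:, 0]) + rng.normal(0, np.sqrt(sigma2))
      print(adjusted_semivariogram(coords, z, sigma2, np.linspace(0, 10, 6)))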

  10. Effect of advanced aircraft noise reduction technology on the 1990 projected noise environment around Patrick Henry Airport. [development of noise exposure forecast contours for projected traffic volume and aircraft types

    NASA Technical Reports Server (NTRS)

    Cawthorn, J. M.; Brown, C. G.

    1974-01-01

    A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies which are currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits which would add sound absorbent material in the nacelles of the engines or which would replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented of the effects on noise exposure area of the total number of daily operations.

  11. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths of about 5 km and longer and horizontal along-track wavelengths of about 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  12. Variance Estimation for Myocardial Blood Flow by Dynamic PET.

    PubMed

    Moody, Jonathan B; Murthy, Venkatesh L; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P

    2015-11-01

    The estimation of myocardial blood flow (MBF) by (13)N-ammonia or (82)Rb dynamic PET typically relies on an empirically determined generalized Renkin-Crone equation to relate the kinetic parameter K1 to MBF. Because the Renkin-Crone equation defines MBF as an implicit function of K1, the MBF variance cannot be determined using standard error propagation techniques. To overcome this limitation, we derived novel analytical approximations that provide first- and second-order estimates of MBF variance in terms of the mean and variance of K1 and the Renkin-Crone parameters. The accuracy of the analytical expressions was validated by comparison with Monte Carlo simulations, and MBF variance was evaluated in clinical (82)Rb dynamic PET scans. For both (82)Rb and (13)N-ammonia, good agreement was observed between both (first- and second-order) analytical variance expressions and Monte Carlo simulations, with moderately better agreement for second-order estimates. The contribution of the Renkin-Crone relation to overall MBF uncertainty was found to be as high as 68% for (82)Rb and 35% for (13)N-ammonia. For clinical (82)Rb PET data, the conventional practice of neglecting the statistical uncertainty in the Renkin-Crone parameters resulted in underestimation of the coefficient of variation of global MBF and coronary flow reserve by 14-49%. Knowledge of MBF variance is essential for assessing the precision and reliability of MBF estimates. The form and statistical uncertainty in the empirical Renkin-Crone relation can make substantial contributions to the variance of MBF. The novel analytical variance expressions derived in this work enable direct estimation of MBF variance which includes this previously neglected contribution.
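
    Because the extraction function gives K1 explicitly in terms of flow, a first-order version of such a variance estimate follows from the implicit function theorem: Var(MBF) ~ Var(K1) / (dK1/dMBF)^2 at the estimate. The sketch below uses a generic Renkin-Crone form with placeholder parameters (not the published tracer calibrations) and deliberately ignores the uncertainty in those parameters, which the paper shows can contribute a large share of the MBF variance:

      import numpy as np
      from scipy.optimize import brentq

      a, b = 0.77, 0.63              # illustrative Renkin-Crone parameters

      def k1_of_flow(f):
          return f * (1.0 - a * np.exp(-b / f))   # K1 as a function of MBF

      def flow_of_k1(k1):
          return brentq(lambda f: k1_of_flow(f) - k1, 1e-3, 20.0)

      k1_hat, k1_var = 0.6, 0.004    # mean and variance from the kinetic fit
      f_hat = flow_of_k1(k1_hat)

      # first-order delta method: Var(MBF) ~ Var(K1) / (dK1/dMBF)^2
      eps = 1e-6
      dk1_df = (k1_of_flow(f_hat + eps) - k1_of_flow(f_hat - eps)) / (2 * eps)
      print(f_hat, k1_var / dk1_df**2)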

  14. Variance computations for functional of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  15. Variance computations for functional of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  16. Innovative clean coal technology: 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Final report, Phases 1 - 3B

    SciTech Connect

    1998-01-01

    This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire systems.

  17. Increased circulating VCAM-1 correlates with advanced disease and poor survival in patients with multiple myeloma: reduction by post-bortezomib and lenalidomide treatment

    PubMed Central

    Terpos, E; Migkou, M; Christoulas, D; Gavriatopoulou, M; Eleutherakis-Papaiakovou, E; Kanellias, N; Iakovaki, M; Panagiotidis, I; Ziogas, D C; Fotiou, D; Kastritis, E; Dimopoulos, M A

    2016-01-01

    Circulating vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and selectins were prospectively measured in 145 newly-diagnosed patients with symptomatic myeloma (NDMM), 61 patients with asymptomatic/smoldering myeloma (SMM), 47 with monoclonal gammopathy of undetermined significance (MGUS) and 87 multiple myeloma (MM) patients at first relapse who received lenalidomide- or bortezomib-based treatment (RD, n=47; or VD, n=40). Patients with NDMM had increased VCAM-1 and ICAM-1 compared with MGUS and SMM patients. Elevated VCAM-1 correlated with ISS-3 and was independently associated with inferior overall survival (OS) (45 months for patients with VCAM-1 >median vs 75 months, P=0.001). MM patients at first relapse had increased levels of ICAM-1 and L-selectin, even compared with NDMM patients, and had increased levels of VCAM-1 compared with MGUS and SMM. Both VD and RD dramatically reduced serum VCAM-1 after four cycles of therapy, but only VD reduced serum ICAM-1, irrespective of response to therapy. The reduction of VCAM-1 was more pronounced after RD than after VD. Our study provides evidence for the prognostic value of VCAM-1 in myeloma patients, suggesting that VCAM-1 could be a suitable target for the development of anti-myeloma therapies. Furthermore, the reduction of VCAM-1 and ICAM-1 by RD and VD supports the inhibitory effect of these drugs on the adhesion of MM cells to stromal cells. PMID:27232930

  18. 180 MW demonstration of advanced tangentially-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Final report

    SciTech Connect

    Tavoulareas, E.S.; Hardman, R.; Eskinazi, D.; Smith, L.

    1994-02-01

    This report provides the key findings of the Innovative Clean Coal Technology (ICCT) demonstration project at Gulf Power's Lansing Smith Unit No. 2 and the implications for other tangentially-fired boilers. L. Smith Unit No. 2 is a 180 MW tangentially-fired boiler burning Eastern Bituminous coal, which was retrofitted with Asea Brown Boveri/Combustion Engineering Services' (ABB/CE) LNCFS I, II, and III technologies. An extensive test program was carried out with US Department of Energy, Southern Company and Electric Power Research Institute (EPRI) funding. The LNCFS I, II, and III achieved 37 percent, 37 percent, and 45 percent average long-term NO{sub x} emission reduction at full load, respectively. Similar NO{sub x} reduction was achieved within the control range (100--200 MW). However, below the control point (100 MW), NO{sub x} emissions with the LNCFS technologies increased significantly, reaching pre-retrofit levels at 70 MW. Short-term testing proved that low load NO{sub x} emissions could be reduced further by using lower excess O{sub 2} and burner tilt, but with adverse impacts on unit performance, such as lower steam outlet temperatures and, potentially, higher CO emissions and LOI.

  19. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report: First quarter 1993

    SciTech Connect

    Not Available

    1993-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. During this quarter, long-term testing of the LNB + AOFA configuration continued and no parametric testing was performed. Further full-load optimization of the LNB + AOFA system began on March 30, 1993. Following completion of this optimization, comprehensive testing in this configuration will be performed including diagnostic, performance, verification, long-term, and chemical emissions testing. These tests are scheduled to start in May 1993 and continue through August 1993. Preliminary engineering and procurement are progressing on the Advanced Low NOx Digital Controls scope addition to the wall-fired project. The primary activities during this quarter include (1) refinement of the input/output lists, (2) procurement of the distributed digital control system, (3) configuration training, and (4) revision of schedule to accommodate project approval cycle and change in unit outage dates.

  20. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Third quarterly technical progress report

    SciTech Connect

    Not Available

    1993-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, LNB, and LNB plus AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected shows the full-load NO{sub x} emission levels to be approximately 0.65 lb/MBtu with fly ash LOI values of approximately 8 percent. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. For comparison, the long-term full-load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing in the LNB+AOFA configuration indicates that at full-load, NO{sub x} emissions and fly ash LOI are near 0.40 lb/MBtu and 8 percent, respectively. However, it is believed that a substantial portion of the incremental change in NO{sub x} emissions between the LNB and LNB+AOFA configurations is the result of additional burner tuning and other operational adjustments and is not the result of the AOFA system. During this quarter, LNB+AOFA testing was concluded. Testing performed during this quarter included long-term and verification testing in the LNB+AOFA configuration.

  1. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances.

    PubMed

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-03-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene-gene and gene-environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity.
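
    The exact meta-analysis procedure of Deng et al. is not reproduced here; the following is a minimal sketch under stated assumptions: a per-study Levene's test across genotype groups (scipy.stats.levene, median-centred) with study-level P-values combined by Fisher's method (scipy.stats.combine_pvalues) as a simple stand-in. Like the paper's approach, it needs only study-level summaries rather than individual-level data; all data below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def levene_by_genotype(phenotype, genotype):
    """Levene's test for equal phenotype variance across genotype groups (0/1/2)."""
    groups = [phenotype[genotype == g] for g in (0, 1, 2) if np.any(genotype == g)]
    stat, p = stats.levene(*groups, center="median")  # median-centred variant
    return p

# Simulate three studies in which one allele inflates the trait variance.
pvals = []
for n in (800, 1200, 1000):
    geno = rng.binomial(2, 0.3, size=n)                  # additive SNP coding
    pheno = rng.normal(0.0, 1.0 + 0.25 * geno, size=n)   # variance grows with allele count
    pvals.append(levene_by_genotype(pheno, geno))

# Combine the study-level P-values (Fisher's method; a stand-in for the
# paper's meta-analysis, which likewise exchanges no individual-level data).
chi2, p_meta = stats.combine_pvalues(pvals, method="fisher")
print("per-study P:", [f"{p:.2e}" for p in pvals], " meta P:", f"{p_meta:.2e}")
```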

  2. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    PubMed Central

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533

  3. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    PubMed

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, which were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive; these made use of information either on the variance of the estimated breeding value and the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
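
    To illustrate the Monte Carlo idea, the sketch below uses one of the formulations named in the abstract, the variance of the true breeding value minus the estimated breeding value, for a toy one-random-effect model in which the exact PEV is available from the mixed-model equations for comparison. The model structure and all variance values are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_id = 200, 40
sigma_u2, sigma_e2 = 4.0, 10.0
lam = sigma_e2 / sigma_u2

# Incidence matrix assigning each record to one random-effect level.
ids = rng.integers(0, n_id, size=n_obs)
Z = np.zeros((n_obs, n_id)); Z[np.arange(n_obs), ids] = 1.0

# Exact PEV from the mixed-model equations: PEV = sigma_e^2 * (Z'Z + lam*I)^-1.
C_inv = np.linalg.inv(Z.T @ Z + lam * np.eye(n_id))
pev_exact = sigma_e2 * np.diag(C_inv)

# Monte Carlo: empirical variance of (true effect - BLUP) over repeated samples.
n_samples = 2000
errs = np.empty((n_samples, n_id))
for s in range(n_samples):
    u = rng.normal(0.0, np.sqrt(sigma_u2), n_id)
    y = Z @ u + rng.normal(0.0, np.sqrt(sigma_e2), n_obs)
    u_hat = C_inv @ (Z.T @ y)          # BLUP for this zero-mean toy model
    errs[s] = u - u_hat
pev_mc = errs.var(axis=0)

print("mean exact PEV:", pev_exact.mean().round(3),
      " mean MC PEV:", pev_mc.mean().round(3))
```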

  4. Blinded sample size re-estimation in superiority and noninferiority trials: bias versus variance in variance estimation.

    PubMed

    Friede, Tim; Kieser, Meinhard

    2013-01-01

    The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure that uses the randomization block information, with respect to bias and variance of the variance estimators and to the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
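
    A minimal sketch of the recommended one-sample (blinded) approach, assuming a standard two-group z-test sample size formula; the specific re-estimation procedures compared in the paper are not reproduced, and the pilot data, effect size, and error rates below are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Internal pilot: pooled data from both arms, treatment labels unseen.
delta_assumed = 5.0                     # clinically relevant difference
pilot = np.concatenate([rng.normal(0, 12, 60), rng.normal(delta_assumed, 12, 60)])

# One-sample (blinded) variance of the pooled pilot data, as recommended
# in the abstract above.  Note it is biased upward by the hidden mean
# difference: E[s^2] ~ sigma^2 + delta^2/4 under 1:1 allocation.
s2_blinded = pilot.var(ddof=1)

# Standard per-group sample size for a two-sided z-test (alpha=0.05, power=0.90).
alpha, power = 0.05, 0.90
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_group = int(np.ceil(2 * s2_blinded * z**2 / delta_assumed**2))
print(f"blinded variance estimate: {s2_blinded:.1f}  ->  n per group: {n_per_group}")
```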

  5. Host nutrition alters the variance in parasite transmission potential.

    PubMed

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
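
    The distributional contrast described above can be illustrated with the variance-to-mean ratio of simulated parasite loads; the distributions and parameters below are illustrative assumptions, not the experimental data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Toy parasite loads: Poisson-like under low food, overdispersed
# (negative binomial) under high food -- mirroring the pattern reported above.
low_food  = rng.poisson(20, size=200)
high_food = rng.negative_binomial(n=2, p=2 / (2 + 20), size=200)  # same mean ~20

for label, loads in [("low food", low_food), ("high food", high_food)]:
    vmr = loads.var(ddof=1) / loads.mean()   # variance-to-mean ratio
    print(f"{label:9s}: mean={loads.mean():5.1f}  var/mean={vmr:5.1f}  "
          f"skew={stats.skew(loads):+.2f}")
# var/mean ~ 1 suggests a Poisson load distribution; >> 1 with right skew
# indicates overdispersion, i.e. a few hosts carrying disproportionately
# high loads (potential superspreaders).
```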

  6. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... maintenance of the distribution system. ...

  7. veqtl-mapper: variance association mapping for molecular phenotypes.

    PubMed

    Brown, Andrew Anand

    2017-09-01

    Genetic loci associated with the variance of phenotypic traits have been of recent interest as they can be signatures of genetic interactions, gene by environment interactions, parent-of-origin effects and canalization. We present a fast, efficient tool to map loci affecting variance of gene expression and other molecular phenotypes in cis. Applied to the publicly available Geuvadis gene expression dataset, we identify 816 loci associated with variance of gene expression using an additive model, and 32 showing differences in variance between homozygous and heterozygous alleles, signatures of parent-of-origin effects. Documentation and links to source code and binaries for Linux can be found at https://funpopgen.github.io/veqm/. Contact: andrew.brown@unige.ch. Supplementary data are available at Bioinformatics online.

  8. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  9. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  10. Variance comparisons for unbiased estimators of probability of correct classification

    NASA Technical Reports Server (NTRS)

    Moore, D. S.; Landgrebe, D. A.; Whitsitt, S. J.

    1976-01-01

    Variance relationships among certain count estimators and posterior probability estimators of probability of correct classification are investigated. An estimator using posterior probabilities is presented for use in stratified sampling designs. A test case involving three normal classes is examined.
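
    A sketch of the two estimator families for a two-class Gaussian problem (the record's test case used three normal classes); the setup below is an assumption chosen so that the true probability of correct classification is known in closed form.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Two equiprobable Gaussian classes, N(-1,1) and N(+1,1); Bayes rule: sign(x).
n = 500
labels = rng.integers(0, 2, n)
x = rng.normal(2.0 * labels - 1.0, 1.0)

# Count estimator: fraction of samples the Bayes rule classifies correctly.
correct = (x > 0) == (labels == 1)
p_count = correct.mean()

# Posterior estimator: average the maximum posterior probability per sample.
# P(class 1 | x) = 1 / (1 + exp(-2x)) for equal priors, unit variances.
post1 = 1.0 / (1.0 + np.exp(-2.0 * x))
p_post = np.maximum(post1, 1.0 - post1).mean()

true_pcc = norm.cdf(1.0)  # exact P(correct) = Phi(1) ~ 0.841 for this setup
print(f"count: {p_count:.3f}  posterior: {p_post:.3f}  true: {true_pcc:.3f}")
# Both are unbiased for P(correct); the posterior estimator typically has
# lower variance, which is the theme of the comparison in the record above.
```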

  11. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    USDA-ARS?s Scientific Manuscript database

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
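
    For readers who want the baseline technique in code, a minimal one-way ANOVA on hypothetical yield data (all numbers invented):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical yields (t/ha) under three treatments; data are illustrative only.
rng = np.random.default_rng(2)
control   = rng.normal(5.0, 0.6, 12)
fert_low  = rng.normal(5.5, 0.6, 12)
fert_high = rng.normal(6.1, 0.6, 12)

f_stat, p_value = f_oneway(control, fert_low, fert_high)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # small P: treatment means differ
```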

  12. Estimating the generalized concordance correlation coefficient through variance components.

    PubMed

    Carrasco, Josep L; Jover, Lluís

    2003-12-01

    The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
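
    A sketch of the moment-method CCC (Lin's estimator) for two observers; the variance-components formulation via a mixed model, which the paper recommends for more than two observers and for covariate adjustment, is not reproduced here. The observer data are simulated.

```python
import numpy as np

def ccc_moments(y1, y2):
    """Lin's concordance correlation coefficient via the method of moments."""
    m1, m2 = y1.mean(), y2.mean()
    v1, v2 = y1.var(), y2.var()          # biased (1/n) variances, as in Lin (1989)
    cov = ((y1 - m1) * (y2 - m2)).mean()
    return 2 * cov / (v1 + v2 + (m1 - m2) ** 2)

rng = np.random.default_rng(8)
truth = rng.normal(50, 10, 100)
obs1 = truth + rng.normal(0, 3, 100)         # observer 1: unbiased
obs2 = truth + 2 + rng.normal(0, 3, 100)     # observer 2: constant shift of +2
print(f"CCC = {ccc_moments(obs1, obs2):.3f}")  # the shift penalizes CCC, unlike Pearson r
```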

  13. 40 CFR 142.42 - Consideration of a variance request.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... contaminant level required by the national primary drinking water regulations because of the nature of the raw... effectiveness of treatment methods for the contaminant for which the variance is requested. (2) Cost and other...

  14. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    DTIC Science & Technology

    2008-12-01

    Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation. DONG L. WU, Jet Propulsion Laboratory, California Institute of... stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically... orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5–8 times larger than those from the Upper Atmosphere

  15. Hidden item variance in multiple mini-interview scores.

    PubMed

    Zaidi, Nikki L Bibler; Swoboda, Christopher M; Kelcey, Benjamin M; Manuel, R Stephen

    2017-05-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as the rating items which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden-item person-by-station (p × s|i) G-study design results in biased estimated variance components. Estimates from this hidden-item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden-item model (p × s|i) will produce biased variance components, which can in turn bias reliability estimates; researchers should therefore consider the more complete person-by-station-by-item (p × s × i) model when evaluating the generalizability of MMI scores.
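
    A minimal simulation, not the paper's full G-study, showing where item-related variance goes when item scores are averaged before analysis: with a true person-by-item interaction, the naive person variance from the hidden-item (p × s|i) analysis is inflated. All variance components below are invented.

```python
import numpy as np

rng = np.random.default_rng(13)
n_p, n_s, n_i = 500, 8, 6          # persons, stations, rating items
s_p, s_s, s_i, s_pi, s_e = 1.0, 0.5, 0.4, 0.6, 0.8   # true effect SDs

P  = rng.normal(0, s_p,  (n_p, 1, 1))     # person main effect (the "signal")
S  = rng.normal(0, s_s,  (1, n_s, 1))     # station effect
I  = rng.normal(0, s_i,  (1, 1, n_i))     # item effect
PI = rng.normal(0, s_pi, (n_p, 1, n_i))   # person-by-item interaction
E  = rng.normal(0, s_e,  (n_p, n_s, n_i)) # residual
Y = P + S + I + PI + E                    # fully crossed p x s x i scores

# Hidden-item analysis: average over items first (the p x s | i design).
X = Y.mean(axis=2)                                 # person-by-station means
person_var_hidden = X.mean(axis=1).var(ddof=1)     # naive person variance

# The person-by-item interaction does not vanish: it is folded into the
# apparent person variance, inflating it by roughly sigma_pi^2 / n_i.
print(f"true person variance: {s_p**2:.3f}")
print(f"hidden-item estimate: {person_var_hidden:.3f} "
      f"(expected ~ {s_p**2 + s_pi**2 / n_i + (s_e**2 / n_i) / n_s:.3f})")
```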

  16. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach

    NASA Astrophysics Data System (ADS)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-06-01

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected model parameters using a modified volatility basis-set (VBS) approach: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semivolatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the model parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether or not SOA that starts as semivolatile is rapidly transformed to nonvolatile SOA by particle-phase processes such as oligomerization and/or accretion, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to nonvolatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. However
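
    A toy sketch of variance-based first-order sensitivity computed from an ensemble by least squares (a simple stand-in for the generalized linear model method cited); the response function, parameter names, and coefficients below are invented, chosen only to echo the reported switch-dependent behaviour.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 250  # ensemble size, echoing the record's 250-member ensemble

# Toy stand-ins for parameter scalings (quasi-Monte Carlo in the study;
# plain random sampling here): biogenic VOC, anthropogenic SIVOC, NOx,
# and a 0/1 volatility-transformation switch.
bvoc, sivoc, nox = rng.uniform(-1, 1, (3, n))
vol_switch = rng.integers(0, 2, n)

# Hypothetical SOA response: the switch dominates, and the emission terms
# matter differently depending on the switch -- echoing the reported split.
soa = 3.0 * vol_switch + 1.5 * bvoc * vol_switch \
      + 1.2 * sivoc * (1 - vol_switch) + 0.2 * nox + rng.normal(0, 0.3, n)

# Approximate first-order variance shares via least squares on the sampled
# design (assumes a near-orthogonal design; interactions are ignored).
X = np.column_stack([vol_switch, bvoc, sivoc, nox]).astype(float)
X_c = X - X.mean(axis=0)
beta, *_ = np.linalg.lstsq(X_c, soa - soa.mean(), rcond=None)
shares = beta**2 * X_c.var(axis=0) / soa.var()
for name, sh in zip(["volatility switch", "BVOC", "SIVOC", "NOx"], shares):
    print(f"{name:18s}: {100 * sh:4.1f}% of SOA variance")
```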

  17. Why risk is not variance: an expository note.

    PubMed

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann-Morgenstern utility theory.
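
    The note's point can be made with one line of arithmetic: a prospect paying a fixed gain G with probability p (and zero otherwise) has mean pG and variance p(1-p)G^2, so for p below one-half, raising p raises both. A sufficiently variance-averse mean-variance score then ranks the dominating prospect lower, as the sketch below (invented numbers) shows.

```python
# A prospect pays a fixed gain G with probability p (zero otherwise):
#   mean = p*G,   variance = p*(1-p)*G^2
G, k = 100.0, 0.05  # gain and variance-aversion weight (both invented)

def mv_score(p):
    mean = p * G
    var = p * (1 - p) * G**2
    return mean - k * var, mean, var

for p in (0.1, 0.5):
    score, mean, var = mv_score(p)
    print(f"p={p:.1f}: mean={mean:5.1f}  var={var:7.1f}  score={score:7.1f}")
# Raising p from 0.1 to 0.5 gives a higher chance of the same gain with no
# downside, yet the mean-variance score falls (-35 vs -75): the violation
# of first-order dominance discussed above.
```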

  18. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
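
    A minimal haploid Wright-Fisher-style simulation, not the paper's dioecious model: two alleles with equal mean offspring number but unequal offspring variance, where the lower-variance allele fixes more often than the neutral expectation of one-half. Population size, offspring distributions, and replicate counts are invented.

```python
import numpy as np

rng = np.random.default_rng(21)
N, reps = 100, 1000

def run_once():
    # Allele A: Poisson offspring (mean 2, variance 2).
    # Allele B: same mean (2) but higher offspring variance (~6).
    alleles = np.array([0] * (N // 2) + [1] * (N // 2))  # 0 = A, 1 = B
    while 0 < alleles.sum() < N:  # loop until one allele fixes
        w = np.where(alleles == 0,
                     rng.poisson(2.0, N),
                     rng.negative_binomial(1, 1 / 3, N))  # mean 2, var 6
        if w.sum() == 0:
            continue  # no offspring at all this generation; redraw
        parents = rng.choice(N, size=N, p=w / w.sum())    # resample to constant N
        alleles = alleles[parents]
    return alleles[0] == 0  # did the low-variance allele fix?

fix_A = np.mean([run_once() for _ in range(reps)])
print(f"fixation probability of low-variance allele: {fix_A:.3f} (neutral: 0.5)")
# Equal means, unequal variances: the lower-variance allele fixes more
# often, illustrating selection on reproductive variance.
```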

  19. The Evolution and Consequences of Sex-Specific Reproductive Variance

    PubMed Central

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130

  20. Orientation variance as a quantifier of structure in texture.

    PubMed

    Dakin, S C

    1999-01-01

    I consider how structure is derived from texture containing changes in orientation over space, and propose that multi-local orientation variance (the average orientation variance across a series of discrete image locales) is an estimate of the degree of organization that is useful both for spatial scale selection and for discriminating structure from noise. The oriented textures used in this paper are Glass patterns, which contain structure at a narrow range of scales. The effect of adding noise to Glass patterns, on a structure versus noise task (Maloney et al., 1987), is compared to discrimination based on orientation variance and template matching (i.e. having prior knowledge of the target's orientation structure). At all but very low densities, the variance model accounts well for human data. Next, both models' estimates of tolerable orientation variance are shown to be broadly consistent with human discrimination of texture from noise. However, neither model can account for subjects' lower tolerance to noise for translational patterns than other (e.g. rotational) patterns. Finally, to investigate how well these structural measures preserve local orientation discontinuities, I show that the presence of a patch of unstructured dots embedded in a Glass pattern produces a change in multi-local orientation variance that is sufficient to account for human detection (Hel Or and Zucker, 1989). Together, these data suggest that simple orientation statistics could drive a range of 'texture tasks', although the dependency of noise resistance on the pattern type (rotation, translation, etc.) remains to be accounted for.
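
    A sketch of the statistic's flavour, not Dakin's exact model: double the axial angles, compute the circular variance within each local patch, and average across patches. The rotational field, patch size, and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def multilocal_orientation_variance(theta, patch=8):
    """Average local circular variance of an orientation field (radians, mod pi).

    Orientations are axial, so angles are doubled before computing the
    circular variance within each patch; patch values are then averaged.
    """
    h, w = theta.shape
    variances = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            t = 2.0 * theta[r:r + patch, c:c + patch]   # double axial angles
            R = np.hypot(np.cos(t).mean(), np.sin(t).mean())
            variances.append(1.0 - R)                   # circular variance
    return float(np.mean(variances))

# Concentric (rotational) Glass-like orientation field versus pure noise.
yy, xx = np.mgrid[-32:32, -32:32] + 0.5
rotational = np.mod(np.arctan2(yy, xx) + np.pi / 2, np.pi)  # tangent orientations
noise = rng.uniform(0, np.pi, rotational.shape)

print(f"rotational field: {multilocal_orientation_variance(rotational):.3f}")
print(f"noise field:      {multilocal_orientation_variance(noise):.3f}")
# Low multi-local variance signals structure; values near 1 indicate
# unstructured noise, so the statistic separates pattern from noise.
```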