Science.gov

Sample records for advanced variance reduction

  1. MCNP variance reduction overview

    SciTech Connect

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.
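
    Most of the standard techniques alluded to above are built from two weight games: splitting and Russian roulette. The following minimal sketch (hypothetical thresholds, not MCNP's actual implementation) plays both games on a particle's statistical weight in a way that preserves the expected total weight.

    ```python
    import random

    def split_or_roulette(weight, w_low=0.25, w_high=2.0, w_survive=1.0):
        """Play the two standard weight games on one particle.

        Splitting divides a heavy particle into lighter copies; Russian
        roulette kills a light particle with probability 1 - weight/w_survive
        or promotes it to weight w_survive. Both games preserve the expected
        total weight, so tallies remain unbiased.
        Returns the list of post-game weights.
        """
        if weight > w_high:                                  # splitting
            n = int(weight / w_survive + random.random())    # unbiased rounding
            return [weight / n] * n
        if weight < w_low:                                   # Russian roulette
            if random.random() < weight / w_survive:
                return [w_survive]                           # survivor
            return []                                        # killed
        return [weight]                                      # inside the window

    # Expectation check: the mean post-game weight equals the input weight.
    random.seed(1)
    w0 = 0.1
    mean = sum(sum(split_or_roulette(w0)) for _ in range(200000)) / 200000
    print(f"input weight {w0}, mean post-game weight {mean:.4f}")
    ```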

  2. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    SciTech Connect

    Edward W. Larsen

    2008-06-01

    The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (iii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due

  3. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    SciTech Connect

    Chiba, G.; Tsuji, M.; Narabayashi, T.

    2015-01-15

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  4. A multicomb variance reduction scheme for Monte Carlo semiconductor simulators

    SciTech Connect

    Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.

    1998-04-01

    The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
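
    The comb is a population-control device: it resamples a set of unequally weighted particles into a fixed number of equal-weight survivors. Below is a minimal single-comb sketch (the multicomb variant of the abstract applies several combs and is not reproduced here); all names and parameters are illustrative.

    ```python
    import random

    def comb(weights, m):
        """Resample a weighted particle population to m equal-weight survivors.

        A comb with m equally spaced teeth (one shared random offset) is laid
        over the cumulative weight line; each tooth selects the particle whose
        weight interval it lands in. Every survivor gets weight W/m, so the
        total weight W is preserved exactly and the expected number of copies
        of particle i is m * w_i / W, i.e. the resampling is unbiased.
        Returns a list of (particle_index, new_weight) pairs.
        """
        W = sum(weights)
        spacing = W / m
        tooth = random.random() * spacing      # common offset for all teeth
        out, cum, i = [], 0.0, 0
        for k in range(m):
            target = tooth + k * spacing
            while cum + weights[i] < target:   # advance to the interval hit
                cum += weights[i]
                i += 1
            out.append((i, spacing))
        return out

    random.seed(2)
    population = [0.01, 2.5, 0.4, 1.1, 0.02]   # straggling weights
    print(comb(population, 4))                 # 4 survivors of equal weight
    ```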

  5. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  6. Variance reduction methods applied to deep-penetration problems

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.

  7. Monte Carlo calculation of specific absorbed fractions: variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.

    2015-04-01

    The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code PENELOPE for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10⁵ s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
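
    Of the two techniques named above, interaction forcing is the simpler to illustrate: the free path is sampled from an exponential distribution truncated to the region of interest, and the particle weight absorbs the interaction probability. A generic sketch follows (not PENELOPE's actual routine).

    ```python
    import math, random

    def forced_interaction(mu, L):
        """Force a photon to interact within a slab of thickness L.

        Analog sampling would let the photon escape with probability
        exp(-mu*L); interaction forcing instead samples the free path from
        the exponential distribution truncated to [0, L] and multiplies the
        particle weight by P(interact in slab) = 1 - exp(-mu*L), which
        keeps the tally expectation unchanged.
        Returns (path_length, weight_factor).
        """
        p_int = 1.0 - math.exp(-mu * L)
        xi = random.random()
        s = -math.log(1.0 - xi * p_int) / mu   # inverse CDF of truncated exp.
        return s, p_int

    random.seed(3)
    s, w = forced_interaction(mu=0.02, L=10.0)   # optically thin: mu*L = 0.2
    print(f"forced collision at s = {s:.3f} cm, weight factor {w:.3f}")
    ```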

  8. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
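
    A minimal sketch of the stratification idea described above: emission positions in a 1-D cell are allocated between the fringe strips and the interior in a chosen proportion, with stratum weights set so that the expected emitted energy in each subregion is unchanged. All parameter values are hypothetical.

    ```python
    import random

    def fringe_biased_emission(n_total, cell_width, fringe_width, frac_fringe):
        """Stratify thermal-emission sampling between fringe and interior.

        The emitted energy is uniform over the cell, but particles born deep
        in an optically thick interior almost never escape. A chosen fraction
        of the samples is placed in the two fringe strips next to the cell
        faces, and each stratum is weighted by
        (its emission share) / (its sample share),
        which leaves the expected emission per subregion unbiased.
        Returns (position, weight) pairs for unit total emitted energy.
        """
        share_fringe = 2.0 * fringe_width / cell_width  # fringes' emission share
        particles = []
        for _ in range(n_total):
            if random.random() < frac_fringe:           # born in a fringe strip
                x = random.uniform(0.0, fringe_width)
                if random.random() < 0.5:
                    x = cell_width - x                  # right-hand fringe
                w = share_fringe / frac_fringe
            else:                                       # born in the interior
                x = random.uniform(fringe_width, cell_width - fringe_width)
                w = (1.0 - share_fringe) / (1.0 - frac_fringe)
            particles.append((x, w / n_total))
        return particles

    random.seed(4)
    pop = fringe_biased_emission(10000, cell_width=1.0, fringe_width=0.05,
                                 frac_fringe=0.8)
    print(sum(w for _, w in pop))   # ~1.0: total emitted energy preserved
    ```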

  9. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  10. A comparison of variance reduction techniques for radar simulation

    NASA Astrophysics Data System (ADS)

    Divito, A.; Galati, G.; Iovino, D.

    Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and consequently for reducing the estimation errors. This feature is paid for by a lack of generality of the simulation procedure. The EVT technique is only valid when a probability tail is to be estimated (false alarm problems) and requires, as the only a priori information, that the considered variate belongs to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
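
    The trade-off described above can be made concrete with a standard importance-sampling estimate of a false-alarm-type tail probability. The tilted sampling density below is a textbook choice, not the authors' radar-specific procedure.

    ```python
    import math, random

    def tail_prob_is(t, n, rng):
        """Estimate P(X > t) for X ~ N(0,1) by importance sampling.

        Draws come from the tilted density N(t, 1), so the rare region is
        hit about half the time; each hit is weighted by the likelihood
        ratio phi(x) / phi(x - t) = exp(t^2/2 - t*x).
        Returns (estimate, standard_error).
        """
        acc = acc2 = 0.0
        for _ in range(n):
            x = rng.gauss(t, 1.0)
            if x > t:
                lr = math.exp(0.5 * t * t - t * x)
                acc += lr
                acc2 += lr * lr
        mean = acc / n
        var = acc2 / n - mean * mean
        return mean, math.sqrt(var / n)

    rng = random.Random(5)
    est, err = tail_prob_is(4.0, 100000, rng)
    print(f"P(X > 4) ~ {est:.3e} +/- {err:.1e}")
    # For reference, the exact tail is ~3.17e-5: an analog estimator would
    # see roughly one success per 30,000 samples at this level.
    ```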

  11. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
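
    The mitigation described above amounts to capping how aggressively the weight window may split an arriving particle. A schematic sketch follows (hypothetical thresholds, not CCFE's actual MCNP modification).

    ```python
    def window_split(weight, w_up, w_center, n_max=10):
        """Split a particle entering a weight window, with a splitting cap.

        A standard weight window would create about round(weight / w_center)
        copies; a particle arriving with a weight orders of magnitude above
        the window then explodes into an enormous, long-running history.
        Capping the number of copies at n_max (each copy keeping weight/n,
        i.e. locally relaxing the window) bounds the history length at the
        cost of some variance-reduction performance, without biasing the
        mean, since the total weight is conserved exactly.
        """
        if weight <= w_up:
            return [weight]
        n = min(max(1, round(weight / w_center)), n_max)
        return [weight / n] * n

    # A 10^4-weight arrival yields 10 copies of weight 1000 instead of 10^4
    # copies of weight ~1: a far shorter history.
    print(window_split(1.0e4, w_up=2.0, w_center=1.0))
    ```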

  12. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

  13. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

  14. Irreversible Langevin samplers and variance reduction: a large deviations approach

    NASA Astrophysics Data System (ADS)

    Rey-Bellet, Luc; Spiliopoulos, Konstantinos

    2015-07-01

    In order to sample from a given target distribution (often of Gibbs type), the Monte Carlo Markov chain method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and it strictly improves the speed of convergence of ergodic averages for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
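
    The effect is easy to reproduce numerically for a Gaussian target, where the added drift J∇U with skew-symmetric J is divergence-free and tangent to the level sets of U, hence preserves the Gibbs measure. The sketch below (hypothetical step sizes and path counts) compares the variance of ergodic averages with and without the irreversible drift.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])   # skew-symmetric generator

    def ergodic_average_var(gamma, n_paths=200, n_steps=20000, dt=0.01):
        """Variance of time-averages of f(x) = x_0 along the diffusion
        dX = (-X + gamma * J X) dt + sqrt(2) dW.

        For the standard Gaussian target U(x) = |x|^2 / 2, the extra drift
        gamma * J * grad U is divergence-free and orthogonal to grad U, so
        it leaves the Gibbs measure invariant while breaking reversibility.
        """
        x = rng.standard_normal((n_paths, 2))        # start in equilibrium
        avg = np.zeros(n_paths)
        for _ in range(n_steps):
            drift = -x + gamma * x @ J.T             # gradient + rotation
            x = x + drift * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)
            avg += x[:, 0]
        avg /= n_steps
        return avg.var()

    print("reversible (gamma=0)  :", ergodic_average_var(0.0))
    print("irreversible (gamma=5):", ergodic_average_var(5.0))  # smaller
    ```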

  15. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    NASA Astrophysics Data System (ADS)

    Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.

    2012-01-01

    Computation time is an important and problematic parameter in Monte Carlo simulations, since the statistical error decreases only as the inverse square root of the computing time; hence the idea of using variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase-space file is also appropriate for greatly reducing the computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated the various cards related to the variance reduction techniques provided by MCNPX. The parameter values found in this study can be used efficiently in the MCNPX code. Final calculations are performed in two steps that are linked by a phase space. Results show that, compared to direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.

  16. Verification of the history-score moment equations for weight-window variance reduction

    SciTech Connect

    Solomon, Clell J; Sood, Avneet; Booth, Thomas E; Shultis, J. Kenneth

    2010-12-06

    The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo results for one-dimensional problems and between 1-2% for two-dimensional problems.

  17. Optimization under uncertainty: Adaptive variance reduction, adaptive metamodeling, and investigation of robustness measures

    NASA Astrophysics Data System (ADS)

    Medina, Juan Camilo

    This dissertation offers computational and theoretical advances for optimization under uncertainty problems that utilize a probabilistic framework for addressing such uncertainties, and adopt a probabilistic performance as objective function. Emphasis is placed on applications that involve potentially complex numerical and probability models. A generalized approach is adopted, treating the system model as a "black-box" and relying on stochastic simulation for evaluating the probabilistic performance. This approach can impose, though, an elevated computational cost, and two of the advances offered in this dissertation aim at decreasing the computational burden associated with stochastic simulation when integrated with optimization applications. The first one develops an adaptive implementation of importance sampling (a popular variance reduction technique) by sharing information across the iterations of the numerical optimization algorithm. The system model evaluations from the current iteration are utilized to formulate importance sampling densities for subsequent iterations with only a small additional computational effort. The characteristics of these densities as well as the specific model parameters these densities span are explicitly optimized. The second advancement focuses on adaptive tuning of a kriging metamodel to replace the computationally intensive system model. A novel implementation is considered, establishing a metamodel with respect to both the uncertain model parameters as well as the design variables, offering significant computational savings. Additionally, the adaptive selection of certain characteristics of the metamodel, such as support points or order of basis functions, is considered by utilizing readily available information from the previous iteration of the optimization algorithm. The third advancement extends to a different application and considers the assessment of the appropriateness of different candidate robust designs. A novel

  18. Variance reduction techniques for estimation of integrals over a set of branching trajectories

    NASA Astrophysics Data System (ADS)

    Tsvetkov, E. A.

    2014-02-01

    Monte Carlo variance reduction techniques within the supertrack approach are justified as applied to estimating non-Boltzmann tallies equal to the mean of a random variable defined on the set of all branching trajectories. For this purpose, a probability space is constructed on the set of all branching trajectories, and the unbiasedness of this method is proved by averaging over all trajectories. Variance reduction techniques, such as importance sampling, splitting, and Russian roulette, are discussed. A method is described for extending available codes based on the von Neumann-Ulam scheme in order to cover the supertrack approach.

  19. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    NASA Astrophysics Data System (ADS)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  20. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    SciTech Connect

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
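
    The distance-to-collision biasing at the heart of such methods can be sketched in a few lines: sample the free flight with a fictitious cross section and correct the weight by the ratio of densities. In the LIFT method the stretching parameter is local, derived from the deterministic adjoint solution; the sketch below uses a single hypothetical constant instead.

    ```python
    import math, random

    def biased_free_flight(sigma_t, sigma_star, rng):
        """Sample a distance-to-collision from a stretched exponential.

        The true free-flight density is sigma_t * exp(-sigma_t * s);
        sampling instead with a smaller fictitious cross section
        sigma_star < sigma_t pushes particles deeper into a shield, and the
        weight is multiplied by the ratio of true to sampling densities so
        that all tallies remain unbiased.
        Returns (path_length, weight_factor).
        """
        s = -math.log(rng.random()) / sigma_star
        w = (sigma_t / sigma_star) * math.exp(-(sigma_t - sigma_star) * s)
        return s, w

    rng = random.Random(7)
    s, w = biased_free_flight(sigma_t=1.0, sigma_star=0.3, rng=rng)
    print(f"flight of {s:.2f} mfp, weight factor {w:.3f}")
    ```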

  1. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    SciTech Connect

    Clarke, Peter; Varghese, Philip; Goldstein, David

    2014-12-09

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.

  2. A "local" exponential transform method for global variance reduction in Monte Carlo transport problems

    SciTech Connect

    Baker, R.S.; Larsen, E.W.

    1992-08-01

    Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be not as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.

  3. ADVANTG 3.0.1: AutomateD VAriaNce reducTion Generator

    SciTech Connect

    2015-08-17

    Version 00 ADVANTG is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 V1.60 (CCC-810, not included in this distribution) based on approximate 3-D multigroup discrete ordinates adjoint transport solutions generated by Denovo (included in this distribution). The variance reduction parameters generated by ADVANTG consist of space and energy-dependent weight-window bounds and biased source distributions, which are output in formats that can be directly used with unmodified versions of MCNP5. ADVANTG has been applied to neutron, photon, and coupled neutron-photon simulations of real-world radiation detection and shielding scenarios. ADVANTG is compatible with all MCNP5 geometry features and can be used to accelerate cell tallies (F4, F6, F8), surface tallies (F1 and F2), point-detector tallies (F5), and Cartesian mesh tallies (FMESH).

  4. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    SciTech Connect

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C; Murphy, Brian D; Mueller, Don

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  5. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    SciTech Connect

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
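
    Ingredient (3), the multilevel variance reduction, is essentially a control-variate identity: E[f_HF] = E[f_RB] + E[f_HF - f_RB], with many cheap samples spent on the first term and few expensive ones on the small-variance correction. A toy sketch with hypothetical stand-in outputs (not actual PDE solves) follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Stand-ins for the PDE outputs: a "high-fidelity" output and a cheap,
    # highly correlated "reduced basis" approximation (both hypothetical).
    def output_hf(omega): return np.exp(0.5 * omega) + 0.05 * np.sin(5 * omega)
    def output_rb(omega): return np.exp(0.5 * omega)

    N_rb, N_hf = 200000, 2000     # many cheap samples, few expensive ones
    omega_rb = rng.standard_normal(N_rb)
    omega_hf = rng.standard_normal(N_hf)

    # Two-level estimator: cheap mean plus a small correction evaluated with
    # HF and RB on the *same* samples, so the correction has tiny variance.
    level0 = output_rb(omega_rb).mean()
    corr = output_hf(omega_hf) - output_rb(omega_hf)
    estimate = level0 + corr.mean()

    # Error estimate combining the sample variances of both levels.
    sigma = np.sqrt(output_rb(omega_rb).var() / N_rb + corr.var() / N_hf)
    print(f"E[output] ~ {estimate:.4f} +/- {sigma:.4f}")
    # Direct MC with the same HF budget would have a much larger error,
    # roughly sqrt(var(HF output) / N_hf).
    ```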

  6. Importance sampling variance reduction for the Fokker-Planck rarefied gas particle method

    NASA Astrophysics Data System (ADS)

    Collyer, B. S.; Connaughton, C.; Lockerby, D. A.

    2016-11-01

    The Fokker-Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.

  7. Variance reduction for Fokker-Planck based particle Monte Carlo schemes

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick

    2015-08-01

    Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
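
    The correlated-process idea above is a control variate built from a second process with a known answer, driven by the same noise. A minimal sketch follows (hypothetical drifts and parameters, not the authors' Fokker-Planck scheme).

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def correlated_estimate(eps=0.1, n_paths=5000, n_steps=5000, dt=0.002):
        """Variance reduction via a correlated auxiliary process.

        X follows a nonlinear SDE whose stationary second moment we want;
        Y is a linear OU process, dY = -Y dt + sqrt(2) dW, with the *known*
        stationary moment E[Y^2] = 1, driven by the same Brownian
        increments. Because X and Y stay highly correlated, the estimator
        X^2 - Y^2 + 1 has the same mean as X^2 but far less noise.
        """
        x = rng.standard_normal(n_paths)
        y = x.copy()                                 # identical start
        for _ in range(n_steps):
            dW = np.sqrt(2 * dt) * rng.standard_normal(n_paths)
            x += -(x + eps * x**3) * dt + dW         # main (nonlinear) process
            y += -y * dt + dW                        # auxiliary, known answer
        plain = x**2
        paired = x**2 - y**2 + 1.0                   # control-variate estimator
        return (plain.mean(), plain.std() / np.sqrt(n_paths),
                paired.mean(), paired.std() / np.sqrt(n_paths))

    m0, s0, m1, s1 = correlated_estimate()
    print(f"plain   E[X^2] ~ {m0:.4f} +/- {s0:.4f}")
    print(f"coupled E[X^2] ~ {m1:.4f} +/- {s1:.4f}")  # same mean, less noise
    ```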

  8. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2007-09-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the "hot" regions of the accelerator, information which is basic to developing a source model for this therapy tool.

  9. MCNPX-PoliMi Variance Reduction Techniques for Simulating Neutron Scintillation Detector Response

    NASA Astrophysics Data System (ADS)

    Prasad, Shikha

    Scintillation detectors have emerged as a viable He-3 replacement technology in the field of nuclear nonproliferation and safeguards. The scintillation light produced in the detectors is dependent on the energy deposited and the nucleus with which the interaction occurs. For neutrons interacting with hydrogen in organic liquid scintillation detectors, the energy-to-light conversion process is nonlinear. MCNPX-PoliMi is a Monte Carlo code that has been used for simulating this detailed scintillation physics; however, until now, simulations have only been done in analog mode. Analog Monte Carlo simulations can take long times to run, especially in the presence of shielding and large source-detector distances, as in the case of typical nonproliferation problems. In this thesis, two nonanalog approaches to speed up MCNPX-PoliMi simulations of neutron scintillation detector response have been studied. In the first approach, a response matrix method (RMM) is used to efficiently calculate neutron pulse height distributions (PHDs). This method combines the neutron current incident on the detector face with an MCNPX-PoliMi-calculated response matrix to generate PHDs. The PHD calculations and their associated uncertainty are compared for a polyethylene-shielded and lead-shielded Cf-252 source for three different techniques: fully analog MCNPX-PoliMi, the RMM, and the RMM with source biasing. The RMM with source biasing reduces computation time or increases the figure-of-merit on average by a factor of 600 for polyethylene and 300 for lead shielding (when compared to the fully analog calculation). The simulated neutron PHDs show good agreement with the laboratory measurements, thereby validating the RMM. In the second approach, MCNPX-PoliMi simulations are performed with the aid of variance reduction techniques. This is done by separating the analog and nonanalog components of the simulations. Inside the detector region, where scintillation light is produced, no variance
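
    At estimation time the RMM itself reduces to a single matrix-vector product between a precomputed detector response matrix and the incident neutron current. All numbers below are hypothetical placeholders for MCNPX-PoliMi-computed quantities.

    ```python
    import numpy as np

    # Hypothetical 3-group incident neutron current (per source particle)
    # and a response matrix R[i, j]: probability that an incident neutron
    # in energy bin j produces a pulse in light-output bin i.
    current = np.array([0.8e-3, 2.1e-3, 0.6e-3])
    R = np.array([[0.30, 0.05, 0.01],
                  [0.10, 0.25, 0.08],
                  [0.02, 0.10, 0.20]])

    # Response matrix method: the pulse height distribution is a single
    # matrix-vector product instead of a full analog detector simulation,
    # so the expensive detector response is computed (and paid for) once.
    phd = R @ current
    print(phd)
    ```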

  10. Variance reduction techniques for fast Monte Carlo CBCT scatter correction calculations

    NASA Astrophysics Data System (ADS)

    Mainegra-Hing, Ernesto; Kawrakow, Iwan

    2010-08-01

    Several variance reduction techniques improving the efficiency of the Monte Carlo estimation of the scatter contribution to a cone beam computed tomography (CBCT) scan were implemented in egs_cbct, an EGSnrc-based application for CBCT-related calculations. The largest impact on the efficiency comes from the splitting + Russian Roulette techniques which are described in detail. The fixed splitting technique is outperformed by both the position-dependent importance splitting (PDIS) and the region-dependent importance splitting (RDIS). The superiority of PDIS over RDIS observed for a water phantom with bone inserts is not observed when applying these techniques to a more realistic human chest phantom. A maximum efficiency improvement of several orders of magnitude over an analog calculation is obtained. A scatter calculation combining the reported efficiency gain with a smoothing algorithm is already in the proximity of being of practical use if a medium size computer cluster is available.

  11. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Sempau, J.; Brualla, L.

    2012-05-01

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of the following two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.

  12. Attention-Induced Variance and Noise Correlation Reduction in Macaque V1 Is Mediated by NMDA Receptors

    PubMed Central

    Herrero, Jose L.; Gieselmann, Marc A.; Sanayei, Mehdi; Thiele, Alexander

    2013-01-01

    Attention improves perception by affecting different aspects of the neuronal code. It enhances firing rates, it reduces firing rate variability and noise correlations of neurons, and it alters the strength of oscillatory activity. Attention-induced rate enhancement in striate cortex requires cholinergic mechanisms. The neuropharmacological mechanisms responsible for attention-induced variance and noise correlation reduction or those supporting changes in oscillatory activity are unknown. We show that ionotropic glutamatergic receptor activation is required for attention-induced rate variance, noise correlation, and LFP gamma power reduction in macaque V1, but not for attention-induced rate modulations. NMDA receptors mediate attention-induced variance reduction and attention-induced noise correlation reduction. Our results demonstrate that attention improves sensory processing by a variety of mechanisms that are dissociable at the receptor level. PMID:23719166

  13. Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC

    NASA Astrophysics Data System (ADS)

    Kawrakow, Iwan; Fippel, Matthias

    2000-08-01

    Several variance reduction techniques, such as photon splitting, electron history repetition, Russian roulette and the use of quasi-random numbers are investigated and shown to significantly improve the efficiency of the recently developed XVMC Monte Carlo code for photon beams in radiation therapy. It is demonstrated that it is possible to further improve the efficiency by optimizing transport parameters such as electron energy cut-off, maximum electron energy step size, photon energy cut-off and a cut-off for kerma approximation, without loss of calculation accuracy. These methods increase the efficiency by a factor of up to 10 compared with the initial XVMC ray-tracing technique or a factor of 50 to 80 compared with EGS4/PRESTA. Therefore, a common treatment plan (6 MV photons, 10 × 10 cm² field size, 5 mm voxel resolution, 1% statistical uncertainty) can be calculated within 7 min using a single CPU 500 MHz personal computer. If the requirement on the statistical uncertainty is relaxed to 2%, the calculation time will be less than 2 min. In addition, a technique is presented which allows for the quantitative comparison of Monte Carlo calculated dose distributions and the separation of systematic and statistical errors. Employing this technique it is shown that XVMC calculations agree with EGSnrc on a sub-per cent level for simulations in the energy and material range of interest for radiation therapy.

  14. Implementation of hybrid variance reduction methods in a multigroup Monte Carlo code for deep shielding problems

    SciTech Connect

    Somasundaram, E.; Palmer, T. S.

    2013-07-01

    In this paper, the work that has been done to implement variance reduction techniques in a three-dimensional, multigroup Monte Carlo code, Tortilla, which works within the framework of the commercial deterministic code Attila, is presented. This project aims to develop an integrated hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library and source definitions. Tortilla can also read importance functions (like the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results of an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)

  15. Hybrid mesh generation using advancing reduction technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...

  16. Variance reduction through robust design of boundary conditions for stochastic hyperbolic systems of equations

    SciTech Connect

    Nordström, Jan; Wahlsten, Markus

    2015-02-01

    We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates for the variance of the solution. This means that, with the same knowledge of the data, we can obtain a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.

  17. An automated variance reduction method for global Monte Carlo neutral particle transport problems

    NASA Astrophysics Data System (ADS)

    Cooper, Marc Andrew

    A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the symbolic weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
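
    The connection described above can be sketched directly: if the weight-window centers in each region are made proportional to the forward scalar flux, the Monte Carlo particle density becomes roughly uniform across the problem. The flux values below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical forward scalar flux from a deterministic calculation,
    # falling by orders of magnitude across a deep-penetration shield.
    phi = np.array([1.0e0, 1.3e-2, 1.7e-4, 2.2e-6, 2.9e-8])

    # Global variance reduction idea: the physical flux is (particle
    # density) x (statistical weight), so choosing window centers
    # proportional to phi flattens the particle density everywhere.
    w_center = phi / phi.max()                    # unit weight at the source
    w_low, w_up = 0.5 * w_center, 2.0 * w_center  # a common window width

    for cell, (lo, up) in enumerate(zip(w_low, w_up)):
        print(f"cell {cell}: weight window [{lo:.2e}, {up:.2e}]")
    ```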

  1. Advanced CO2 Removal and Reduction System

    NASA Technical Reports Server (NTRS)

    Alptekin, Gokhan; Dubovik, Margarita; Copeland, Robert J.

    2011-01-01

    An advanced system for removing CO2 and H2O from cabin air, reducing the CO2, and returning the resulting O2 to the air is less massive than a prior system that includes two assemblies: one for removal and one for reduction. Also, in this system, unlike in the prior system, there is no need to compress and temporarily store CO2. In the present system, removal and reduction take place within a single assembly, wherein removal is effected by use of an alkali sorbent and reduction is effected using a supply of H2 and a Ru catalyst, by means of the Sabatier reaction, CO2 + 4H2 → CH4 + 2H2O. The assembly contains two fixed-bed reactors operating in alternation: At first, air is blown through the first bed, which absorbs CO2 and H2O. Once the first bed is saturated with CO2 and H2O, the flow of air is diverted through the second bed and the first bed is regenerated by supplying it with H2 for the Sabatier reaction. Initially, the H2 is heated to provide heat for the regeneration reaction, which is endothermic. In the later stages of regeneration, the Sabatier reaction, which is exothermic, supplies the heat for regeneration.

  2. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    SciTech Connect

    Milias-Argeitis, Andreas; Khammash, Mustafa; Lygeros, John

    2014-07-14

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

  3. VR-BFDT: A variance reduction based binary fuzzy decision tree induction method for protein function prediction.

    PubMed

    Golzari, Fahimeh; Jalili, Saeed

    2015-07-21

    In the protein function prediction (PFP) problem, the goal is to predict the functions of numerous well-sequenced known proteins whose function is still not known precisely. PFP is one of the special and complex problems in the machine learning domain, in which a protein (regarded as an instance) may have more than one function simultaneously. Furthermore, the functions (regarded as classes) are dependent and are also organized in a hierarchical structure in the form of a tree or directed acyclic graph. One of the common learning methods proposed for solving this problem is the decision tree, in which, because the data are partitioned into sets with sharp boundaries, small changes in the attribute values of a new instance may cause an incorrect change in the predicted label of the instance and, finally, misclassification. In this paper, a Variance Reduction based Binary Fuzzy Decision Tree (VR-BFDT) algorithm is proposed to predict the functions of proteins. This algorithm fuzzifies only the decision boundaries instead of converting the numeric attributes into fuzzy linguistic terms. It has the ability to assign multiple functions to each protein simultaneously and preserves the hierarchy consistency between functional classes. It uses the label variance reduction as the splitting criterion to select the best "attribute-value" pair at each node of the decision tree. The experimental results show that the overall performance of the proposed algorithm is promising.
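
    The splitting criterion named above, label variance reduction, is the regression-tree analogue of information gain. The crisp single-label version is sketched below; VR-BFDT itself applies it with fuzzified boundaries and multiple hierarchical labels, which are not reproduced here.

    ```python
    def variance_reduction(labels, left_idx, right_idx):
        """Label-variance reduction of one candidate split.

        Scores a split by how much the weighted average of the child label
        variances falls below the parent variance; the tree greedily picks
        the "attribute-value" pair with the largest reduction.
        """
        def var(idx):
            if not idx:
                return 0.0
            m = sum(labels[i] for i in idx) / len(idx)
            return sum((labels[i] - m) ** 2 for i in idx) / len(idx)

        n = len(left_idx) + len(right_idx)
        parent = var(left_idx + right_idx)
        children = (len(left_idx) * var(left_idx)
                    + len(right_idx) * var(right_idx)) / n
        return parent - children

    labels = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
    print(variance_reduction(labels, [0, 1, 2], [3, 4, 5]))  # large reduction
    ```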

  4. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    PubMed Central

    Sampson, Andrew; Le, Yi; Williamson, Jeffrey F.

    2012-01-01

    On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 × 1 × 1 mm³) and breast (0.67 × 0.67 × 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional average 38- to 60-fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study. PMID:22320816

  5. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    PubMed

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com.
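
    A compact sketch of the estimator as we read it from the description above: fixed-effect inverse-variance weights for the point estimate, with a variance term inflated by a DerSimonian-Laird tau^2. The exact quasi-likelihood formulas are given in the paper and implemented in MetaXL, so treat this as an approximation.

    ```python
    import numpy as np

    def ivhet(y, v):
        """IVhet-style pooled estimate and 95% CI for effects y with variances v."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v
        theta = np.sum(w * y) / np.sum(w)          # fixed-effect point estimate

        # DerSimonian-Laird between-study variance
        q = np.sum(w * (y - theta) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)

        # variance with per-study variances inflated by tau^2 (quasi-likelihood idea)
        wn = w / np.sum(w)
        var = np.sum(wn ** 2 * (v + tau2))
        half = 1.96 * np.sqrt(var)
        return theta, (theta - half, theta + half)

    # toy log odds ratios and their variances
    print(ivhet([0.1, -0.3, 0.4, 0.0], [0.04, 0.09, 0.02, 0.01]))
    ```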

  6. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators of use in cancer therapy

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2010-01-01

    The Monte Carlo simulation of clinical electron linear accelerators requires long computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in reducing this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and increases the efficiency of the simulation by a factor larger than 50.
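
    The two techniques the ant colony scheme controls can be sketched generically as a weight-window step; the pheromone-style bookkeeping that adapts the importances is not reproduced here.

    ```python
    import random

    def apply_weight_window(weight, importance):
        """One splitting / Russian-roulette step against a weight window whose
        center is inversely proportional to the local importance."""
        center = 1.0 / importance
        w_low, w_high = 0.5 * center, 2.0 * center
        if weight > w_high:                   # split into n lower-weight copies
            n = int(weight / center) + 1
            return [weight / n] * n
        if weight < w_low:                    # roulette: kill or restore to center
            return [center] if random.random() < weight / center else []
        return [weight]

    print(apply_weight_window(3.0, importance=2.0))  # important region: particle splits
    print(apply_weight_window(0.1, importance=0.5))  # unimportant region: roulette
    ```

    Both branches conserve the expected weight by construction, which is what keeps the biased game unbiased.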

  7. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    SciTech Connect

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
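
    The logic of the method can be caricatured on a one-dimensional toy (illustrative only; real implementations such as MAVRIC and ADVANTG perform full deterministic forward and adjoint transport solves):

    ```python
    import numpy as np

    # 1D slab toy: source at x = 0, absorber with attenuation mu; the forward flux
    # falls off exponentially, so an analog MC run starves the deep cells.
    mu, n = 1.5, 10
    x = (np.arange(n) + 0.5) / n                 # cell centers on [0, 1]
    phi_fwd = np.exp(-mu * x)                    # forward deterministic estimate

    # FW-CADIS for a mesh (global flux) tally: adjoint source = 1 / forward flux,
    # so every cell carries equal "response" and ends up with similar statistics.
    q_adj = 1.0 / phi_fwd

    # Toy adjoint "transport": exponential kernel between cells, standing in for
    # the deterministic adjoint solve a real code performs.
    kernel = np.exp(-mu * np.abs(x[:, None] - x[None, :])) / n
    phi_adj = kernel @ q_adj

    # Weight-window centers ~ 1 / adjoint function, normalized at the source cell:
    # particles heading into important (deep) regions are split to lower weight.
    ww = phi_adj[0] / phi_adj
    print(np.round(ww, 3))
    ```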

  8. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland.
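
    One workhorse technique in this family is interaction forcing. The sketch below (a generic illustration, not this code's API) forces a rare interaction in a weakly absorbing slab and carries the interaction probability as a statistical weight; the score here is trivially the interaction itself, so the forced estimator has zero variance, while in a real simulation the subsequent emission and transport reintroduce some spread.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    mu, L, N = 0.01, 1.0, 100_000        # weak attenuation: interactions are rare

    # Analog sampling: most photons leave the slab without interacting.
    x = rng.exponential(1 / mu, N)
    analog = (x <= L).astype(float)      # scores 1 on the rare interaction

    # Forced interaction: sample the depth from the conditional distribution
    # given that an interaction occurs, and carry P(interact) as a weight.
    p_int = 1.0 - np.exp(-mu * L)
    u = rng.random(N)
    x_forced = -np.log(1.0 - u * p_int) / mu   # depth in (0, L], conditional sample
    forced = np.full(N, p_int)                 # every history scores, weight p_int

    print(analog.mean(), analog.std() / np.sqrt(N))
    print(forced.mean(), forced.std() / np.sqrt(N))   # same mean, no spread here
    ```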

  9. Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits

    NASA Technical Reports Server (NTRS)

    Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.

    2005-01-01

    This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefits of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft: a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft, and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.

  10. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
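
    The baseline factor-model covariance estimator that such adjustments start from can be sketched as a PCA-based k-factor model; the DVA correction itself is not reproduced here.

    ```python
    import numpy as np

    def factor_covariance(returns, k):
        """k-factor covariance estimate: common (low-rank) part plus a diagonal
        of idiosyncratic variances, estimated from the top-k eigenpairs."""
        X = returns - returns.mean(axis=0)
        S = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(S)
        top = np.argsort(vals)[::-1][:k]
        B = vecs[:, top] * np.sqrt(vals[top])          # factor loadings
        common = B @ B.T
        D = np.diag(np.maximum(np.diag(S - common), 1e-12))
        return common + D

    rng = np.random.default_rng(5)
    returns = rng.normal(size=(500, 20))               # toy return panel
    print(factor_covariance(returns, k=3).shape)
    ```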

  12. Oxidation-Reduction Resistance of Advanced Copper Alloys

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.

    2003-01-01

    Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.

  13. 20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO

  14. Three Averaging Techniques for Reduction of Antenna Temperature Variance Measured by a Dicke Mode, C-Band Radiometer

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Lawrence, Roland W.

    2000-01-01

    As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
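
    The reported gains follow from the standard resolution formula for a Dicke radiometer with separate antenna and reference integration times. A small sketch with assumed system numbers (gain fluctuations ignored); with these illustrative choices the fractional reduction comes out near the 34.8% quoted above.

    ```python
    import numpy as np

    def delta_t(t_sys, bandwidth, tau_ant, tau_ref):
        """Ideal radiometric resolution with independent antenna/reference averaging."""
        return t_sys * np.sqrt(1.0 / (bandwidth * tau_ant) + 1.0 / (bandwidth * tau_ref))

    t_sys, B, cycle = 500.0, 50e6, 0.01              # K, Hz, s (assumed values)

    # Classic Dicke switching: 50% of each cycle on the antenna, 50% on reference.
    base = delta_t(t_sys, B, 0.5 * cycle, 0.5 * cycle)

    # 83.3% antenna duty cycle, with the reference side recovered by averaging
    # reference samples over many cycles (here: 12 cycles' worth).
    improved = delta_t(t_sys, B, 0.833 * cycle, 12 * 0.167 * cycle)

    print(base, improved, 1.0 - improved / base)     # fractional Delta-T reduction
    ```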

  15. Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M.

    2000-01-01

    This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.

  16. Recent advances in the kinetics of oxygen reduction

    SciTech Connect

    Adzic, R.

    1996-07-01

    Oxygen reduction is considered an important electrocatalytic reaction; the most notable need remains improvement of the catalytic activity of existing metal electrocatalysts and development of new ones. A review is given of advances in the understanding of the reaction kinetics and improvements of the electrocatalytic properties of some surfaces, with a focus on recent studies relating surface properties to activity and reaction kinetics. The urgent need is to improve the catalytic activity of Pt and to synthesize new, possibly non-noble metal catalysts. New experimental techniques for obtaining new levels of information include various in situ spectroscopies and scanning probes, some involving synchrotron radiation. 138 refs, 18 figs, 2 tabs.

  17. Lung volume reduction for advanced emphysema: surgical and bronchoscopic approaches.

    PubMed

    Tidwell, Sherry L; Westfall, Elizabeth; Dransfield, Mark T

    2012-01-01

    Chronic obstructive pulmonary disease is the third leading cause of death in the United States, affecting more than 24 million people. Inhaled bronchodilators are the mainstay of therapy; they improve symptoms and quality of life and reduce exacerbations. These, along with smoking cessation and long-term oxygen therapy for hypoxemic patients, are the only medical treatments definitively demonstrated to reduce mortality. Surgical approaches include lung transplantation and lung volume reduction; the latter has been shown to improve exercise tolerance, quality of life, and survival in highly selected patients with advanced emphysema. Lung volume reduction surgery results in clinical benefits, but the procedure is associated with a short-term risk of mortality and a more significant risk of cardiac and pulmonary perioperative complications. Interest has been growing in the use of noninvasive, bronchoscopic methods to address the pathological hyperinflation that drives the dyspnea and exercise intolerance characteristic of emphysema. In this review, the mechanism by which lung volume reduction improves pulmonary function is outlined, along with the risks and benefits of the traditional surgical approach. In addition, the emerging bronchoscopic techniques for lung volume reduction are introduced and recent clinical trials examining their efficacy are summarized. PMID:22189668

  18. Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.

  19. Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector

    SciTech Connect

    Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.

    2014-09-01

    Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.

  20. Virus Reduction during Advanced Bardenpho and Conventional Wastewater Treatment Processes.

    PubMed

    Schmitz, Bradley W; Kitajima, Masaaki; Campillo, Maria E; Gerba, Charles P; Pepper, Ian L

    2016-09-01

    The present study investigated wastewater treatment for the removal of 11 different virus types (pepper mild mottle virus; Aichi virus; genogroup I, II, and IV noroviruses; enterovirus; sapovirus; group-A rotavirus; adenovirus; and JC and BK polyomaviruses) by two wastewater treatment facilities utilizing advanced Bardenpho technology and compared the results with conventional treatment processes. To our knowledge, this is the first study comparing full-scale treatment processes that all received sewage influent from the same region. The incidence of viruses in wastewater was assessed with respect to absolute abundance, occurrence, and reduction in monthly samples collected throughout a 12 month period in southern Arizona. Samples were concentrated via an electronegative filter method and quantified using TaqMan-based quantitative polymerase chain reaction (qPCR). Results suggest that Plant D, utilizing an advanced Bardenpho process as secondary treatment, effectively reduced pathogenic viruses better than facilities using conventional processes. However, the absence of cell-culture assays did not allow an accurate assessment of infective viruses. On the basis of these data, the Aichi virus is suggested as a conservative viral marker for adequate wastewater treatment, as it most often showed the best correlation coefficients to viral pathogens, was always detected at higher concentrations, and may overestimate the potential virus risk. PMID:27447291
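
    Reduction across a treatment train is conventionally quantified as a log10 reduction value; a one-line sketch with hypothetical qPCR concentrations:

    ```python
    import math

    def log10_reduction(c_influent, c_effluent):
        """Log10 reduction value (LRV) between influent and effluent
        virus concentrations (e.g., genome copies per liter)."""
        return math.log10(c_influent / c_effluent)

    # hypothetical concentrations, copies/L
    print(log10_reduction(1.0e6, 2.0e2))   # ~3.7 log10 removal
    ```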

  1. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  2. Low cost biological lung volume reduction therapy for advanced emphysema

    PubMed Central

    Bakeer, Mostafa; Abdelgawad, Taha Taha; El-Metwaly, Raed; El-Morsi, Ahmed; El-Badrawy, Mohammad Khairy; El-Sharawy, Solafa

    2016-01-01

    Background Bronchoscopic lung volume reduction (BLVR), using biological agents, is one of the new alternatives to lung volume reduction surgery. Objectives To evaluate the efficacy and safety of biological BLVR using low cost agents including autologous blood and fibrin glue. Methods Enrolled patients were divided into two groups: group A (seven patients), in which autologous blood was used, and group B (eight patients), in which fibrin glue was used. The agents were injected through a triple lumen balloon catheter via fiberoptic bronchoscope. Changes in high resolution computerized tomography (HRCT) volumetry, pulmonary function tests, symptoms, and exercise capacity were evaluated at 12 weeks postprocedure, as were complications. Results In group A, at 12 weeks postprocedure, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: <0.001 and 0.038, respectively). In group B, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: 0.005 and 0.004, respectively). All patients tolerated the procedure with no mortality. Conclusion BLVR using autologous blood and locally prepared fibrin glue is a promising therapy for advanced emphysema in terms of efficacy, safety, and cost-effectiveness. PMID:27536091

  3. Advances in volcano monitoring and risk reduction in Latin America

    NASA Astrophysics Data System (ADS)

    McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Assistance Program, V. D.; Volcano Observatories, L. A.

    2014-12-01

    We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data

  4. Advanced Reduction Processes: A New Class of Treatment Processes

    PubMed Central

    Vellanki, Bhanu Prakash; Batchelor, Bill; Abdel-Wahab, Ahmed

    2013-01-01

    A new class of treatment processes called advanced reduction processes (ARPs) is proposed. ARPs combine activation methods and reducing agents to form highly reactive reducing radicals that degrade oxidized contaminants. Batch screening experiments were conducted to identify effective ARPs by applying several combinations of activation methods (ultraviolet light, ultrasound, electron beam, and microwaves) and reducing agents (dithionite, sulfite, ferrous iron, and sulfide) to degradation of four target contaminants (perchlorate, nitrate, perfluorooctanoic acid, and 2,4-dichlorophenol) at three pH levels (2.4, 7.0, and 11.2). These experiments identified the combination of sulfite activated by ultraviolet light produced by a low-pressure mercury vapor lamp (UV-L) as an effective ARP. More detailed kinetic experiments were conducted with nitrate and perchlorate as target compounds, and nitrate was found to degrade more rapidly than perchlorate. Effectiveness of the UV-L/sulfite treatment process improved with increasing pH for both perchlorate and nitrate. We present the theory behind ARPs, identify potential ARPs, demonstrate their effectiveness against a wide range of contaminants, and provide basic experimental evidence in support of the fundamental hypothesis for ARP, namely, that activation methods can be applied to reductants to form reducing radicals that degrade oxidized contaminants. This article provides an introduction to ARPs along with sufficient data to identify potentially effective ARPs and the target compounds these ARPs will be most effective in destroying. Further research will provide a detailed analysis of degradation kinetics and the mechanisms of contaminant destruction in an ARP. PMID:23840160

  5. Mindfulness-Based Stress Reduction in Advanced Nursing Practice

    PubMed Central

    Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula

    2015-01-01

    The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary. PMID:25673578

  6. Space Launch System NASA Research Announcement Advanced Booster Engineering Demonstration and/or Risk Reduction

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Craig, Kellie D.

    2011-01-01

    The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) Engineering Demonstration and/or Risk Reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining Engineering Demonstration and/or Risk Reduction.

  7. Advanced Fluid Research On Drag reduction In Turbulence Experiments -AFRODITE-

    NASA Astrophysics Data System (ADS)

    Fransson, J. H. M.; Fallenius, B. E. G.; Shahinfar, S.; Sattarzadeh, S. S.; Talamelli, A.

    2011-12-01

    A hot topic in today's debate on global warming is drag reduction in aeronautics. The most beneficial concept for drag reduction is to maintain the major portion of the airfoil laminar. Estimations show that the potential drag reduction can be as much as 15%, which would give a significant reduction of NOx and CO emissions in the atmosphere considering that the number of aircraft take offs, only in the EU, is over 19 million per year. An important element for successful flow control, which can lead to a reduced aerodynamic drag, is enhanced physical understanding of the transition to turbulence process.

  8. Advanced Fluid Research On Drag reduction In Turbulence Experiments -- AFRODITE

    NASA Astrophysics Data System (ADS)

    Fransson, Jens H. M.

    2011-11-01

    A hot topic in today's debate on global warming is drag reduction in aeronautics. The most beneficial concept for drag reduction is to maintain the major portion of the airfoil laminar. Estimations show that the potential drag reduction can be as much as 15%, which would give a significant reduction of NOx and CO emissions in the atmosphere considering that the number of aircraft take offs, only in the EU, is over 19 million per year. In previous tuned wind tunnel measurements it has been shown that roughness elements can be used to appreciably delay transition to turbulence (Fransson et al. 2006, Phys. Rev. Lett. 96, 064501). The result is revolutionary, since the common belief has been that surface roughness causes earlier transition and in turn increases the drag, and it is a proof of concept of the passive control method per se. The beauty of a passive control technique is that no external energy has to be added to the flow system in order to perform the control; instead one uses the existing energy in the flow. Within the research programme AFRODITE, funded by ERC, we will take this passive control method to the next level by making it twofold, more persistent and more robust. Financial support from the European Research Council (ERC) is acknowledged.

  9. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
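
    The two-region partitioning idea can be illustrated with plain static (Guyan) condensation, eliminating the interior degrees of freedom and retaining the contact-candidate set. The paper's mixed, perturbed-Lagrangian formulation is considerably more elaborate than this sketch.

    ```python
    import numpy as np

    def guyan_reduce(K, keep):
        """Static condensation: eliminate the DOFs not in 'keep' from the
        stiffness matrix K, returning the reduced stiffness on 'keep'."""
        keep = np.asarray(keep)
        drop = np.setdiff1d(np.arange(K.shape[0]), keep)
        Kkk = K[np.ix_(keep, keep)]
        Kkd = K[np.ix_(keep, drop)]
        Kdd = K[np.ix_(drop, drop)]
        return Kkk - Kkd @ np.linalg.solve(Kdd, Kkd.T)

    # 4-DOF spring chain, keeping only the two end DOFs (the "contact" region)
    K = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
    print(guyan_reduce(K, [0, 3]))
    ```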

  10. Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells

    NASA Astrophysics Data System (ADS)

    Ali, Mohamed Mahmoud; Kvande, Halvor

    2016-06-01

    There are two main preheating methods used nowadays for aluminum reduction cells. One is based on electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks. The other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the oldest method, but is still frequently used by different aluminum producers. Many improvements have been made to this method by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, as well as measurements made during the preheating process and evaluation of the performance of the preheating, are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, while the resistance bed thickness was between 13 mm and 60 mm. The average cathode surface temperature at the end of the preheating was usually between 800°C and 950°C. The effect of the preheating methods on cell life is unclear and no quantifiable conclusions can be drawn. Some work carried out in the mathematical modeling area is also discussed. It is concluded that more studies of preheated cells under real conditions, based on actual measurements, are needed. The expected development in electrical resistance preheating of aluminum reduction cells is also summarized.

  11. An advanced carbon reactor subsystem for carbon dioxide reduction

    NASA Technical Reports Server (NTRS)

    Noyes, Gary P.; Cusick, Robert J.

    1986-01-01

    An evaluation is presented of the development status of an advanced carbon-reactor subsystem (ACRS) for the production of water and dense, solid carbon from CO2 and hydrogen, as required in physicochemical air revitalization systems for long-duration manned space missions. The ACRS consists of a Sabatier Methanation Reactor (SMR) that reduces CO2 with hydrogen to form methane and water, a gas-liquid separator to remove product water from the methane, and a Carbon Formation Reactor (CFR) to pyrolyze methane to carbon and hydrogen; the hydrogen is recycled to the SMR, while the product carbon is periodically removed from the CFR. A preprototype ACRS under development for the NASA Space Station is described.

  12. Development of an advanced Sabatier CO2 reduction subsystem

    NASA Technical Reports Server (NTRS)

    Kleiner, G. N.; Cusick, R. J.

    1981-01-01

    A preprototype Sabatier CO2 reduction subsystem was successfully designed, fabricated, and tested. The lightweight, quick-starting (less than 5 minutes) reactor utilizes a highly active and physically durable methanation catalyst composed of ruthenium on alumina. The use of this improved catalyst permits a simple, passively controlled reactor design with an average lean-component H2/CO2 conversion efficiency of over 99% over a range of H2/CO2 molar ratios of 1.8 to 5, while operating with process flows equivalent to a crew size of up to five persons. The subsystem requires no heater operation after start-up, even during simulated 55-minute lightside/39-minute darkside orbital operation.

  13. Lung volume reduction therapies for advanced emphysema: an update.

    PubMed

    Berger, Robert L; Decamp, Malcolm M; Criner, Gerard J; Celli, Bartolome R

    2010-08-01

    Observational and randomized studies provide convincing evidence that lung volume reduction surgery (LVRS) improves symptoms, lung function, exercise tolerance, and life span in well-defined subsets of patients with emphysema. Yet, in the face of an estimated 3 million patients with emphysema in the United States, < 15 LVRS operations are performed monthly under the aegis of Medicare, in part because of misleading reporting in lay and medical publications suggesting that the operation is associated with prohibitive risks and offers minimal benefits. Thus, a treatment with proven potential for palliating and prolonging life may be underutilized. In an attempt to lower risks and cost, several bronchoscopic strategies (bronchoscopic emphysema treatment [BET]) to reduce lung volume have been introduced. The following three methods have been tested in some depth: (1) unidirectional valves that allow exit but bar entry of gas to collapse targeted hyperinflated portions of the lung and reduce overall volume; (2) biologic lung volume reduction (BioLVR) that involves intrabronchial administration of a biocompatible complex to collapse, inflame, scar, and shrink the targeted emphysematous lung; and (3) airway bypass tract (ABT) or creation of stented nonanatomic pathways between hyperinflated pulmonary parenchyma and bronchial tree to decompress and reduce the volume of oversized lung. The results of pilot and randomized pivotal clinical trials suggest that the bronchoscopic strategies are associated with lower mortality and morbidity but are also less efficient than LVRS. Most bronchoscopic approaches improve quality-of-life measures without supportive physiologic or exercise tolerance benefits. Although there is promise of limited therapeutic influence, the available information is not sufficient to recommend use of bronchoscopic strategies for treating emphysema. PMID:20682529

  14. Advanced Exploration Systems (AES) Logistics Reduction and Repurposing Project: Advanced Clothing Ground Study Final Report

    NASA Technical Reports Server (NTRS)

    Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini

    2013-01-01

    All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web

  15. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    PubMed Central

    Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning in a pilot-scale drop tube furnace were investigated. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 °C to 1400 °C, carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C, and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the NH3 ability at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only a common dose of ammonia, as in conventional SNCR technology. Mechanistic study shows that the oxidation of CO improves the decomposition of H2O, which enriches the radical pools, igniting the overall reactions at lower temperatures. PMID:15682503

  16. Reporting explained variance

    NASA Astrophysics Data System (ADS)

    Good, Ron; Fletcher, Harold J.

    The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
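
    For a one-way design, the most common such estimate is eta-squared: the between-treatment sum of squares over the total sum of squares. A minimal sketch:

    ```python
    import numpy as np

    def eta_squared(*groups):
        """Explained variance (eta^2): SS_between / SS_total for a one-way design."""
        all_y = np.concatenate(groups)
        grand = all_y.mean()
        ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
        ss_total = np.sum((all_y - grand) ** 2)
        return ss_between / ss_total

    treatment = [5.1, 6.0, 5.8, 6.2]
    control = [4.2, 4.8, 5.0, 4.6]
    print(eta_squared(treatment, control))   # share of variance due to treatment
    ```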

  17. Advanced Risk Reduction Tool (ARRT) Special Case Study Report: Science and Engineering Technical Assessments (SETA) Program

    NASA Technical Reports Server (NTRS)

    Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian

    2000-01-01

    This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings for exploring the correlation between the underlying models of Advanced Risk Reduction Tool (ARRT) relative to how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.

  18. NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements

  19. NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements. Demonstrations and

  20. A COSMIC VARIANCE COOKBOOK

    SciTech Connect

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-04-20

    Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance". This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z

  1. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
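
    In the linear regime the recipe above is just a product of two numbers. The schematic sketch below shows only that structure, with placeholder inputs; the paper's fitting function for the dark-matter cosmic variance and its halo-occupation bias calibration are not reproduced here.

    ```python
    def sigma_galaxy(sigma_dm, bias):
        """Linear-regime recipe: relative cosmic variance of a galaxy sample is
        the galaxy bias times the dark-matter cosmic variance."""
        return bias * sigma_dm

    # Hypothetical inputs: sigma_dm would come from the paper's fitting function
    # for the survey geometry (field size, mean z, bin size), and the bias from
    # its calibration for the chosen stellar-mass range.
    print(sigma_galaxy(sigma_dm=0.05, bias=2.4))   # ~12% relative cosmic variance
    ```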

  2. Chemical oxygen demand reduction in coffee wastewater through chemical flocculation and advanced oxidation processes.

    PubMed

    Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando

    2007-01-01

    The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87%, when applied to the flocculated coffee wastewater.

  3. Impacts of natural organic matter on perchlorate removal by an advanced reduction process.

    PubMed

    Duan, Yuhang; Batchelor, Bill

    2014-01-01

    Perchlorate can be destroyed by Advanced Reduction Processes (ARPs) that combine chemical reductants (e.g., sulfite) with activating methods (e.g., UV light) in order to produce highly reactive reducing free radicals that are capable of rapid and effective perchlorate reduction. However, natural organic matter (NOM) exists widely in the environment and has the potential to influence perchlorate reduction by ARPs that use UV light as the activating method. Batch experiments were conducted to obtain data on the impacts of NOM and the wavelength of light on the destruction of perchlorate by ARPs that use sulfite activated by UV light produced by low-pressure mercury lamps (UV-L) or by KrCl excimer lamps (UV-KrCl). The results indicate that NOM strongly inhibits perchlorate removal by both ARPs, because it competes with sulfite for UV light. Even though the absorbance of sulfite is much higher at 222 nm than at 254 nm, the results indicate that a smaller amount of perchlorate was removed with the UV-KrCl lamp (222 nm) than with the UV-L lamp (254 nm). The results of this study will help to develop the proper way to apply the ARPs as practical water treatment processes. PMID:24521418
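
    The proposed inhibition mechanism, competition for UV photons, can be sketched with a simple absorbance fraction (optically thin, well-mixed approximation; the coefficients below are hypothetical, not the paper's measured values):

    ```python
    def fraction_absorbed_by_sulfite(eps_s, c_s, eps_nom, c_nom):
        """Share of the absorbed UV photons taken by sulfite when NOM competes
        for the same light at a given wavelength."""
        a_s, a_nom = eps_s * c_s, eps_nom * c_nom
        return a_s / (a_s + a_nom)

    # hypothetical molar absorption coefficients (L/mol/cm) and concentrations (mol/L)
    print(fraction_absorbed_by_sulfite(20.0, 1e-3, 300.0, 5e-4))
    ```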

  4. The quantum Allan variance

    NASA Astrophysics Data System (ADS)

    Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał

    2016-08-01

    The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
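
    The quantum bound itself requires numerical optimization, but the quantity being bounded is the ordinary Allan variance. A minimal sketch of the classical estimator, assuming a record of fractional frequency averages y_k taken over successive intervals of equal duration tau (the synthetic data below are illustrative, not clock measurements):

    import numpy as np

    def allan_variance(y):
        # Non-overlapping two-sample (Allan) variance:
        #   sigma_y^2(tau) = 0.5 * mean((y_{k+1} - y_k)^2)
        y = np.asarray(y, dtype=float)
        return 0.5 * np.mean(np.diff(y) ** 2)

    # For white frequency noise the Allan variance equals the variance of y_k.
    rng = np.random.default_rng(0)
    print(allan_variance(rng.normal(0.0, 1e-12, 100000)))  # ~1e-24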

  5. Variance Anisotropy in Kinetic Plasmas

    NASA Astrophysics Data System (ADS)

    Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping

    2016-06-01

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
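
    A minimal sketch of how such a variance anisotropy ratio can be computed from field samples, assuming an (N, 3) array of magnetic field vectors and using the sample mean field to define the parallel direction:

    import numpy as np

    def variance_anisotropy(b):
        # Split the fluctuation variance into components parallel and
        # perpendicular to the mean field, and return perp/par.
        b = np.asarray(b, dtype=float)
        b0 = b.mean(axis=0)
        e_par = b0 / np.linalg.norm(b0)
        db = b - b0
        var_par = (db @ e_par).var()
        var_perp = db.var(axis=0).sum() - var_par
        return var_perp / var_par

    # Synthetic example: mean field along z, perpendicular std ~2.1x parallel.
    rng = np.random.default_rng(1)
    b = rng.normal(0.0, [2.1, 2.1, 1.0], (100000, 3)) + np.array([0.0, 0.0, 5.0])
    print(variance_anisotropy(b))  # ~8.8, close to the 9:1 solar wind ratio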

  6. Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.

    PubMed

    Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang

    2016-05-01

    In view of the climate changes caused by the continuously rising levels of atmospheric CO2, advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. PMID:26996295

  7. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  8. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  9. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  10. Tumor Volume Reduction Rate After Preoperative Chemoradiotherapy as a Prognostic Factor in Locally Advanced Rectal Cancer

    SciTech Connect

    Yeo, Seung-Gu; Kim, Dae Yong; Park, Ji Won; Oh, Jae Hwan; Kim, Sun Young; Chang, Hee Jin; Kim, Tae Hyun; Kim, Byung Chang; Sohn, Dae Kyung; Kim, Min Ju

    2012-02-01

    Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3-4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume - post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27-99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0-100%). Downstaging (ypT0-2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
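
    The volumetric endpoint above is simple to reproduce. A minimal sketch using the stated definition and the 45% cutoff (the input volumes are hypothetical, for illustration only):

    def tvrr(pre_volume, post_volume):
        # TVRR (%) = (pre-CRT volume - post-CRT volume) * 100 / pre-CRT volume
        return (pre_volume - post_volume) * 100.0 / pre_volume

    rate = tvrr(pre_volume=50.0, post_volume=12.0)  # hypothetical cm^3 values
    print(rate, "above cutoff" if rate >= 45.0 else "below cutoff")  # 76.0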

  11. Risk reduction activities for an F-1-based advanced booster for NASA's Space Launch System

    NASA Astrophysics Data System (ADS)

    Crocker, A. M.; Doering, K. B.; Cook, S. A.; Meadows, R. G.; Lariviere, B. W.; Bachtel, F. D.

    For NASA's Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) procurement, Dynetics, Inc. and Pratt & Whitney Rocketdyne (PWR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's goal of enabling competition on an affordable booster that meets the evolved capabilities of the SLS. During the ABEDRR effort, the Dynetics Team will apply state-of-the-art manufacturing and processing techniques to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. ABEDRR will use NASA test facilities to perform full-scale F-1 gas generator and powerpack hot-fire test campaigns for engine risk reduction. Dynetics will also fabricate and test a tank assembly to verify the structural design. The Dynetics Team is partnered with NASA through Space Act Agreements (SAAs) to maximize the expertise and capabilities applied to ABEDRR.

  12. Removal of PCBs in contaminated soils by means of chemical reduction and advanced oxidation processes.

    PubMed

    Rybnikova, V; Usman, M; Hanna, K

    2016-09-01

    Although chemical reduction and advanced oxidation processes have been widely used individually, very few studies have assessed the combined reduction/oxidation approach for soil remediation. In the present study, experiments were performed in spiked sand and historically contaminated soil by using four synthetic nanoparticles (Fe(0), Fe/Ni, Fe3O4, Fe3-xNixO4). These nanoparticles were tested firstly for reductive transformation of polychlorinated biphenyls (PCBs) and then employed as catalysts to promote chemical oxidation reactions (H2O2 or persulfate). The results indicated that bimetallic Fe/Ni nanoparticles showed the highest efficiency in reduction of PCB28 and PCB118 in spiked sand (97 and 79%, respectively), whereas magnetite (Fe3O4) exhibited a high catalytic stability during the combined reduction/oxidation approach. In chemical oxidation, persulfate showed a higher PCB degradation extent than hydrogen peroxide. As expected, the degradation efficiency was found to be limited in historically contaminated soil, where only Fe(0) and Fe/Ni particles exhibited reductive capability towards PCBs (13 and 18%). In the oxidation step, the highest degradation extents were obtained in the presence of Fe(0) and Fe/Ni (18-19%). The increase in particle and oxidant doses improved the efficiency of treatment, but overall degradation extents did not exceed 30%, suggesting that only a small part of the PCBs in soil was available for reaction with catalyst and/or oxidant. The use of organic solvent or cyclodextrin to improve PCB availability in soil did not enhance degradation efficiency, underscoring the strong impact of the soil matrix. Moreover, better PCB degradation was observed in sand spiked with extractable organic matter separated from contaminated soil. In contrast to fractions with larger particle size (250-500 and >500 μm), no PCB degradation was observed in the finest fraction (≤250 μm), which has a higher organic matter content. These findings

  13. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. For NASA's SLS ABEDRR procurement, Dynetics and AR formed a team to offer a series of full-scale risk mitigation hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the

  14. Simultaneous nitrate reduction and acetaminophen oxidation using the continuous-flow chemical-less VUV process as an integrated advanced oxidation and reduction process.

    PubMed

    Moussavi, Gholamreza; Shekoohiyan, Sakine

    2016-11-15

    This work was aimed at investigating the performance of the continuous-flow VUV photoreactor as a novel chemical-less advanced process for simultaneously oxidizing acetaminophen (ACT), as a model pharmaceutical, and reducing nitrate in a single reactor. Solution pH was an important parameter affecting the performance of VUV; the highest ACT oxidation and nitrate reduction were attained at solution pH between 6 and 8. The ACT was oxidized mainly by HO• radicals, while aqueous electrons were the main working agents in the reduction of nitrate. The performance of the VUV photoreactor improved with increasing hydraulic retention time (HRT); complete degradation of ACT and ∼99% reduction of nitrate with 100% N2 selectivity were achieved at an HRT of 80 min. The VUV effluent concentrations of nitrite and ammonium at an HRT of 80 min were below the drinking water standards. A real water sample contaminated with ACT and nitrate was efficiently treated in the VUV photoreactor. Therefore, the VUV photoreactor is a chemical-less advanced process in which both advanced oxidation and advanced reduction reactions are accomplished. This unique feature makes the VUV photoreactor a promising method for treating water contaminated with both pharmaceuticals and nitrate. PMID:27434736

  16. Nuclear Material Variance Calculation

    1995-01-01

    MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
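
    MAVARIC itself is a spreadsheet, but the propagation it automates can be sketched compactly. A minimal sketch assuming independent terms, where the materials balance is transfers in minus transfers out minus the inventory change, and each term follows either an additive or a multiplicative error model (all numbers below are hypothetical):

    import numpy as np

    def term_variance(bulk_mass, concentration, error_std, n_meas,
                      model="multiplicative"):
        # Variance contribution of one materials-balance term. Under a
        # multiplicative model the standard deviation scales with SNM content;
        # under an additive model it is a fixed amount per measurement.
        snm = bulk_mass * concentration
        sd = snm * error_std if model == "multiplicative" else error_std
        return n_meas * sd ** 2

    var_mb = (term_variance(100.0, 0.05, 0.010, n_meas=12)    # input transfers
              + term_variance(90.0, 0.05, 0.020, n_meas=12)   # output transfers
              + term_variance(500.0, 0.05, 0.005, n_meas=2))  # NDA inventories
    print(np.sqrt(var_mb))  # sigma(MB), which sets the loss-detection sensitivity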

  17. DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION

    SciTech Connect

    Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson

    2002-02-01

    The primary objective of the project titled "Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction" was to demonstrate at proof-of-concept scale the use of an online software package, the "Plant Environmental and Cost Optimization System" (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.

  18. Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long haul, commercial quadjet transport with advanced, high bypass engines mounted above the wing.

  19. Advanced subsonic Technology Noise Reduction Element Separate Flow Nozzle Tests for Engine Noise Reduction Sub-Element

    NASA Technical Reports Server (NTRS)

    Saiyed, Naseem H.

    2000-01-01

    Contents of this presentation include: Advanced Subsonic Technology (AST) goals and general information; Nozzle nomenclature; Nozzle schematics; Photograph of all baselines; Configurations tested and types of data acquired; and Engine cycle and plug geometry impact on EPNL.

  20. Nominal analysis of "variance".

    PubMed

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
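
    A minimal one-factor sketch in the spirit of the procedure: pairs of nominal responses either match or vary, within-group and between-group mismatch proportions play the role of sums of squares, and significance comes from resampling. This reconstruction is illustrative, under stated assumptions, and is not the published NANOVA algorithm:

    import numpy as np

    def nanova_one_factor(groups, n_perm=2000, seed=0):
        rng = np.random.default_rng(seed)
        labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])
        resp = np.concatenate(groups)
        iu = np.triu_indices(len(resp), 1)  # all unordered response pairs

        def stat(lab):
            same = (lab[:, None] == lab[None, :])[iu]
            mism = (resp[:, None] != resp[None, :])[iu]
            return mism[~same].mean() - mism[same].mean()  # between - within

        observed = stat(labels)
        p_value = np.mean([stat(rng.permutation(labels)) >= observed
                           for _ in range(n_perm)])
        return observed, p_value

    # Two fictitious groups of nominal responses (categories A/B):
    print(nanova_one_factor([np.array(list("AAAB")), np.array(list("BBBA"))]))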

  1. Aeronautical fuel conservation possibilities for advanced subsonic transports. [application of aeronautical technology for drag and weight reduction

    NASA Technical Reports Server (NTRS)

    Braslow, A. L.; Whitehead, A. H., Jr.

    1973-01-01

    The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R&T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.

  2. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes.

    PubMed

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1 and 16S rRNA genes, by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol/L with a pH of 3.0 and reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol/L accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including initial Fe(2+)/H2O2 molar ratios, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs.
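
    A minimal sketch of the reported first-order kinetic treatment, assuming log-removal data of the form ln(C0/Ct) = k t and a zero-intercept least-squares fit; the times and removals below are hypothetical, not the paper's data:

    import numpy as np

    t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # reaction time, h
    log10_removal = np.array([0.0, 0.9, 1.8, 2.7, 3.5])  # logs of ARGs removed
    ln_removal = np.log(10.0) * log10_removal            # convert to natural log

    k = np.sum(t * ln_removal) / np.sum(t * t)           # first-order rate, 1/h
    print(k)                                             # ~4.1 per hour here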

  3. Conceptual design study of advanced acoustic composite nacelle. [for achieving reductions in community noise and operating expense

    NASA Technical Reports Server (NTRS)

    Goodall, R. G.; Painter, G. W.

    1975-01-01

    Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.

  4. Mechanisms of advanced oxidation processing on bentonite consumption reduction in foundry.

    PubMed

    Wang, Yujue; Cannon, Fred S; Komarneni, Sridhar; Voigt, Robert C; Furness, J C

    2005-10-01

    Prior full-scale foundry data have shown that when an advanced oxidation (AO) process is employed in a green sand system, the foundry needs 20-35% less makeup bentonite clay than when AO is not employed. We herein sought to explore the mechanism of this enhancement and found that AO water displaced the carbon coating of pyrolyzed carbonaceous condensates that otherwise accumulated on the bentonite surface. This was discerned by surface elemental analysis. This AO treatment restored the clay's capacity to adsorb methylene blue (as a measure of its surface charge) and water vapor (as a reflection of its hydrophilic character). In full-scale foundries, these parameters have been tied to improved green compressive strength and mold performance. When baghouse dust from a full-scale foundry received ultrasonic treatment in the lab, 25-30% of the dust classified into the clay-size fraction, whereas only 7% classified this way without ultrasonics. Also, the ultrasonication caused a size reduction of the bentonite due to the delamination of bentonite particles. The average bentonite particle diameter decreased from 4.6 to 3 μm, while the light-scattering surface area increased over 50% after 20 min of ultrasonication. This would greatly improve the bonding efficiency of the bentonite according to the classical clay bonding mechanism. As a combined result of these mechanisms, the reduced bentonite consumption in full-scale foundries could be accounted for. PMID:16245849

  5. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
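
    The abstract does not spell out its correction model; the sketch below illustrates the general idea under stated assumptions: an ordinary linear overlay model (offset plus terms in x and y) extended with a hypothetical radial component that switches on outside a chosen edge radius, fitted to sparse metrology by least squares. The edge term and radius are stand-ins for the paper's edge-specific component, not its actual formulation.

    import numpy as np

    def fit_overlay_model(x, y, dx, edge_radius=140.0):
        # Columns: offset, x term, y term, and an edge-specific term that
        # grows radially outside edge_radius (mm).
        edge = np.maximum(np.hypot(x, y) - edge_radius, 0.0)
        A = np.column_stack([np.ones_like(x), x, y, edge])
        coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
        return coef

    # Synthetic wafer metrology: 400 sites, overlay error in x (nm).
    rng = np.random.default_rng(2)
    x, y = rng.uniform(-150.0, 150.0, (2, 400))
    dx = (0.5 + 0.01 * x + 0.08 * np.maximum(np.hypot(x, y) - 140.0, 0.0)
          + rng.normal(0.0, 0.3, 400))
    print(fit_overlay_model(x, y, dx))  # recovers ~[0.5, 0.01, 0.0, 0.08]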

  6. Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.

  7. Cosmology without cosmic variance

    DOE PAGES

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  9. Applying Ecological Theory to Advance the Science and Practice of School-Based Prejudice Reduction Interventions

    ERIC Educational Resources Information Center

    McKown, Clark

    2005-01-01

    Several school-based racial prejudice-reduction interventions have demonstrated some benefit. Ecological theory serves as a framework within which to understand the limits and to enhance the efficacy of prejudice-reduction interventions. Using ecological theory, this article examines three prejudice-reduction approaches, including social cognitive…

  10. Experiment and mechanism investigation on advanced reburning for NO(x) reduction: influence of CO and temperature.

    PubMed

    Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa

    2005-03-01

    Pulverized coal reburning, ammonia injection and advanced reburning in a pilot-scale drop tube furnace were investigated. A premixed flow of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15-25% reburn heat input, a temperature range from 1100 °C to 1400 °C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 °C, and a fuel-rich stoichiometric ratio is essential; coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C to 1100 °C. CO can improve the NH3 reduction ability at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common dose of ammonia as in conventional SNCR technology. Mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures. PMID:15682503

  12. Budget variance analysis using RVUs.

    PubMed

    Berlin, M F; Budzynski, M R

    1998-01-01

    This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
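
    A minimal sketch of such an RVU-standardized variance analysis, splitting the total budget variance into a rate (cost per RVU) component and a volume component; all dollar and RVU figures below are hypothetical:

    budget_rvus, actual_rvus = 10_000.0, 11_000.0
    budget_cost, actual_cost = 500_000.0, 572_000.0

    std_rate = budget_cost / budget_rvus                # budgeted $ per RVU
    rate_var = (actual_cost / actual_rvus - std_rate) * actual_rvus
    volume_var = (actual_rvus - budget_rvus) * std_rate
    print(rate_var, volume_var, rate_var + volume_var)  # 22000 + 50000 = 72000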

  13. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  14. Advances in potassium catalyzed NOx reduction by carbon materials: An overview

    SciTech Connect

    Bueno-Lopez, A.; Garcia-Garcia, A.; Illan-Gomez, M.J.; Linares-Solano, A.; de Lecea, C.S.M.

    2007-06-15

    The research work conducted in our group concerning the study of the potassium-catalyzed NOx reduction by carbon materials is presented. The importance of the different variables affecting the NOx-carbon reactions is discussed, e.g. carbon porosity, coal rank, potassium loading, influence of the binder used, and effect of the gas composition. The catalyst loading is the main feature affecting the selectivity for NOx reduction against O2 combustion. The NOx reduction without important combustion in O2 occurs between 350 and 475 °C in the presence of the catalyst. The presence of H2O in the gas mixture enhances NOx reduction at low carbon conversions, but as the reaction proceeds, it decreases as the selectivity does. The presence of CO2 diminishes the activity and selectivity of the catalyst. SO2 completely inhibits the catalytic activity of potassium due to sulfate formation.

  15. UV-Visible Spectroelectrochemistry of the Reduction Products of Anthraquinone in Dimethylformamide Solutions: An Advanced Undergraduate Experiment

    NASA Astrophysics Data System (ADS)

    Babaei, Ali; Connor, Paul A.; McQuillan, A. James; Umapathy, Siva

    1997-10-01

    The redox properties of anthraquinone (AQ) may be used to model the behaviour of quinones in biological systems. AQ undergoes two successive one-electron reductions in aprotic solvents to form a stable radical anion (AQ•-) and a stable dianion (AQ2-), but this behaviour is altered in the presence of a proton donor. This advanced undergraduate experiment shows how cyclic voltammetry, digital simulations of cyclic voltammograms, and UV-visible spectroelectrochemistry may be used to examine the reduction behaviour of AQ in dimethylformamide (DMF), both in the absence and presence of benzoic acid. The cyclic voltammetry of AQ in DMF shows two reversible one-electron reductions. This allows the UV-visible spectra of AQ•- and of AQ2- to be determined using an optically transparent thin layer electrode (OTTLE) cell. AQH- may also be detected in the spectra if there are proton impurities. When benzoic acid is added to the DMF, the cyclic voltammograms are markedly altered, with almost all the reduction occurring near the AQ/AQ•- potential and the corresponding oxidation at rather more positive potentials. The UV-visible spectroelectrochemistry shows AQH2 as the stable reduction product under these conditions, while digital simulations of the cyclic voltammograms support a mechanism involving protonation of AQ•- followed by AQH• disproportionation.

  16. Advanced experimental analysis of controls on microbial Fe(III) oxide reduction. First year progress report

    SciTech Connect

    Roden, E.E.; Urrutia, M.M.

    1997-07-01

    The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10-20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggested that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g. extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and pattern

  17. Tungsten Contact and Line Resistance Reduction with Advanced Pulsed Nucleation Layer and Low Resistivity Tungsten Treatment

    NASA Astrophysics Data System (ADS)

    Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi

    2010-09-01

    This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.

  18. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K^2. Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
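
    A minimal sketch of the variance technique as described: detrend each scan to isolate small-scale fluctuations, then subtract the instrument noise variance so only the geophysical part remains. The quadratic detrending order and the noise level here are assumptions for illustration, not the paper's algorithm details:

    import numpy as np

    def gw_variance(scan_k, noise_var_k2):
        # scan_k: one along-track radiance (brightness temperature) scan, K.
        # Remove a smooth quadratic background, then subtract known noise.
        x = np.arange(scan_k.size)
        resid = scan_k - np.polyval(np.polyfit(x, scan_k, 2), x)
        return resid.var() - noise_var_k2

    rng = np.random.default_rng(3)
    scan = (250.0 + 0.6 * np.sin(np.linspace(0.0, 6.0 * np.pi, 30))
            + rng.normal(0.0, 0.3, 30))            # synthetic wave + noise
    print(gw_variance(scan, noise_var_k2=0.09))    # ~0.18 K^2 wave variance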

  19. Stereospecific Reductions of Delta4-Cholesten-3-one: An Advanced Organic Synthesis Project.

    ERIC Educational Resources Information Center

    Markgraf, J. Hodge; And Others

    1988-01-01

    Outlines a multistep project involving oxidation of cholesterol, isomerization of an enone, and reduction of delta-4-cholesten-3-one. Featured is the last stage in which the ring junction is set stereospecifically. Recommends two laboratory periods to complete the reaction. (ML)

  20. NMR Studies of Structure-Reactivity Relationships in Carbonyl Reduction: A Collaborative Advanced Laboratory Experiment

    ERIC Educational Resources Information Center

    Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab

    2012-01-01

    An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…

  1. Current advances of integrated processes combining chemical absorption and biological reduction for NO x removal from flue gas.

    PubMed

    Zhang, Shihan; Chen, Han; Xia, Yinfeng; Liu, Nan; Lu, Bi-Hong; Li, Wei

    2014-10-01

    Anthropogenic nitrogen oxides (NOx) emitted from fossil-fuel-fired power plants cause adverse environmental issues such as acid rain, urban ozone smoke, and photochemical smog. A novel chemical absorption-biological reduction (CABR) integrated process under development is regarded as a promising alternative to conventional selective catalytic reduction processes for NOx removal from flue gas because it is economical and environmentally friendly. The CABR process employs ferrous ethylenediaminetetraacetate [Fe(II)EDTA] as a solvent to absorb the NOx, followed by microbial denitrification of NOx to harmless nitrogen gas. Meanwhile, the absorbent Fe(II)EDTA is biologically regenerated to sustain adequate NOx removal. Compared with the conventional denitrification process, CABR not only enhances the mass transfer of NO from the gas to the liquid phase but also minimizes the impact of oxygen on the microorganisms. This review provides the current advances in the development of the CABR process for NOx removal from flue gas.

  2. Effect of Two Advanced Noise Reduction Technologies on the Aerodynamic Performance of an Ultra High Bypass Ratio Fan

    NASA Technical Reports Server (NTRS)

    Hughes, Christoper E.; Gazzaniga, John A.

    2013-01-01

    A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 and 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.

  3. Energy Saving Melting and Revert Reduction Technology (Energy SMARRT): Manufacturing Advanced Engineered Components Using Lost Foam Casting Technology

    SciTech Connect

    Littleton, Harry; Griffin, John

    2011-07-31

    This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies, such as computer modeling, pattern quality control, casting quality control and marketing tools, were developed to advance the Lost Foam Casting process application and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. Current (2011) annual energy saving estimates, based on commercial introduction in 2011, are 5.02 trillion BTU/year at 97% market penetration by 2020 and 6.46 trillion BTU/year at 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with the melting and pouring of the metal. The average annual estimate of CO2 reduction per year through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).

  4. External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.; Geng, Steven M.

    2013-01-01

    Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.

  5. ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION

    SciTech Connect

    Vangelas, K; ELIZABETH EDWARDS, E; FRANK LOFFLER, F; Brian02 Looney, B

    2006-11-17

    Regulatory protocols generally recognize that destructive processes are the most effective mechanisms that support natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological processes and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) provides a list of parameters that provide indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field. The research described herein continues their development efforts to provide a suite of tools to enable direct measures of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities in a system because it does not distinguish between strains of organisms that have different biodegradation capabilities. The results of the investigations provided evidence that tools focusing on relevant enzymes for functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.

  6. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2016-05-01

    One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated in relation to the statistical downscaling process of precipitation in a specific site using two soft-computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement by the Supervised PCA methods in terms of performance accuracy.
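
    A minimal sketch of Supervised PCA in the sense used here (after Barshan et al.): project onto the leading eigenvectors of X^T H L H X, where H is the centering matrix and L is a kernel on the response, so that retained components have maximal dependence on the target. The data, the linear kernel choice, and the dimensions below are illustrative assumptions:

    import numpy as np

    def supervised_pca(X, y, n_components=2):
        # X: (n_samples, n_features) predictors; y: (n_samples,) response.
        n = X.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
        L = np.outer(y, y)                     # linear kernel on the response
        Q = X.T @ H @ L @ H @ X
        w, V = np.linalg.eigh(Q)
        return V[:, np.argsort(w)[::-1][:n_components]]

    # Synthetic stand-in for atmospheric predictors and precipitation:
    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 15))
    y = X[:, 3] - 0.5 * X[:, 7] + rng.normal(0.0, 0.1, 200)
    Z = X @ supervised_pca(X, y)   # reduced predictors for SVR/RVM downscaling
    print(Z.shape)                 # (200, 2)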

  7. Armor Possibilities and Radiographic Blur Reduction for The Advanced Hydrotest Facility

    SciTech Connect

    Hackett, M

    2001-09-01

    Currently at Lawrence Livermore National Laboratory (LLNL) a composite firing vessel is under development for the Advanced Hydrotest Facility (AHF) to study high explosives. This vessel requires a shrapnel-mitigating layer to protect the vessel during experiments. The primary purpose of this layer is to protect the vessel, yet the material must be transparent to proton radiographs. Presented here are methods available to collect the data needed before selection, along with a comparison tool developed to aid in choosing a material that offers the best ballistic protection while allowing for clear radiographs.

  8. Advanced reburning for reduction of NO sub x emissions in combustion systems

    SciTech Connect

    Seeker, W.R.; Chen, S.L.; Kramlich, J.C.

    1992-08-18

    This patent describes a process for reducing nitrogen oxides in combustion emission systems. It comprises mixing a reburning fuel with combustion emissions in a gaseous reburning zone such that the reburning zone is substantially oxygen deficient; passing the resulting mixture of reburning fuel and combustion emissions into a first burnout zone; introducing a first stream of burnout air into the first burnout zone; advancing the resulting mixture from the first burnout zone to a second burnout zone; and introducing a second stream of burnout air into the second burnout zone.

  9. Advances in earthquake and tsunami sciences and disaster risk reduction since the 2004 Indian ocean tsunami

    NASA Astrophysics Data System (ADS)

    Satake, Kenji

    2014-12-01

    The December 2004 Indian Ocean tsunami was the worst tsunami disaster in the world's history, with more than 200,000 casualties. This disaster was attributed to the giant size (magnitude M ~ 9, source length >1000 km) of the earthquake and to the lack of expectation of such an earthquake, of a tsunami warning system, and of knowledge and preparedness for tsunamis in the Indian Ocean countries. In the last ten years, seismology and tsunami sciences as well as tsunami disaster risk reduction have significantly developed. Progress in seismology includes implementation of earthquake early warning, real-time estimation of earthquake source parameters and tsunami potential, paleoseismological studies on past earthquakes and tsunamis, and studies of probable maximum size, recurrence variability, and long-term forecast of large earthquakes in subduction zones. Progress in tsunami science includes accurate modeling of tsunami sources such as the contribution of horizontal components or "tsunami earthquakes", development of new types of offshore and deep ocean tsunami observation systems such as GPS buoys or bottom pressure gauges, deployments of DART gauges in the Pacific and other oceans, improvements in tsunami propagation modeling, and real-time inversion or data assimilation for tsunami warning. These developments have been utilized for tsunami disaster reduction in the forms of tsunami early warning systems, tsunami hazard maps, and probabilistic tsunami hazard assessments. Some of the above scientific developments helped to reveal the source characteristics of the 2011 Tohoku earthquake, which caused devastating tsunami damage in Japan and the Fukushima Dai-ichi Nuclear Power Station accident. Toward tsunami disaster risk reduction, interdisciplinary and trans-disciplinary approaches are needed between scientists and other stakeholders.

  10. Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet

    PubMed Central

    URIBARRI, JAIME; WOODRUFF, SANDRA; GOODMAN, SUSAN; CAI, WEIJING; CHEN, XUE; PYZIK, RENATA; YONG, ANGIE; STRIKER, GARY E.; VLASSARA, HELEN

    2013-01-01

    Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and introduces practical approaches for reducing dAGE consumption in daily life. Based on the findings, dry heat promotes new dAGE formation by >10- to 100-fold above the uncooked state across food categories. Animal-derived foods that are high in fat and protein are generally AGE-rich and prone to new AGE formation during cooking. In contrast, carbohydrate-rich foods such as vegetables, fruits, whole grains, and milk contain relatively few AGEs, even after cooking. The formation of new dAGEs during cooking was prevented by the AGE inhibitory compound aminoguanidine and significantly reduced by cooking with moist heat, using shorter cooking times, cooking at lower temperatures, and by use of acidic ingredients such as lemon juice or vinegar. The new dAGE database provides a valuable instrument for estimating dAGE intake and for guiding food choices to reduce dAGE intake. PMID:20497781

  11. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two-layer neural network where one layer represents items and one layer…

  12. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…

  13. Spontaneous reduction of advanced twin embryos: its occurrence and clinical relevance in dairy cattle.

    PubMed

    López-Gatius, F; Hunter, R H F

    2005-01-01

    Twin pregnancies represent a management problem in dairy cattle, since the risk of pregnancy loss increases and the profitability of the herd diminishes drastically as the frequency of twin births increases. The aim of this study was to monitor the development of 211 twin pregnancies in high-producing dairy cows in order to determine the best time for an embryo reduction approach. Pregnancy was diagnosed by transrectal ultrasonography between 36 and 42 days after insemination. Animals were then subjected to weekly ultrasound examination until Day 90 of gestation or until pregnancy loss. Viability was determined by monitoring the embryonic/fetal heartbeat until Day 50 of pregnancy, and thereafter by detection of heartbeat or fetal movement. Eighty-six cows (40.8%) bore bilateral and 125 (59.2%) unilateral twin pregnancies. Death of one of the two embryos was registered in 35 cows (16.6%), 33 of them at pregnancy diagnosis. Pregnancy loss occurred in 22 of these cows between 1 and 4 weeks later; thus, 13 cows (6.2% of the total), each carrying one dead embryo of the two, maintained gestation. Total pregnancy loss before Day 90 of pregnancy (mean 69 +/- 14 days) was registered in 51 (24.2%) cows: 7 (8%) of bilateral pregnancies and 44 (35.2%) of unilateral pregnancies, and it was higher (P = 0.0001) for both right (32.4%, 24/74) and left (39.2%, 20/51) unilateral than for bilateral (8.1%, 7/86) twin pregnancies. The single-embryo death rate was significantly (P = 0.02) lower for cows with bilateral twins (9.3%, 8/86) than for all cows with unilateral twins (21.6%, 27/125). By way of overall conclusion, embryo reduction can occur in dairy cattle, and the practical perspective remains that most embryonic mortality in twins (one of the two embryos) occurs around Days 35-40 of gestation, the period when pregnancy diagnosis is generally performed and when embryo reduction could be attempted.

  14. Aerodynamic performance investigation of advanced mechanical suppressor and ejector nozzle concepts for jet noise reduction

    NASA Technical Reports Server (NTRS)

    Wagenknecht, C. D.; Bediako, E. D.

    1985-01-01

    Advanced Supersonic Transport jet noise may be reduced to Federal Aviation Regulation limits if recommended refinements to a recently developed ejector shroud exhaust system are successfully carried out. A two-part program, consisting of a design study and a subscale-model wind tunnel test effort, conducted to define an acoustically treated ejector shroud exhaust system for supersonic transport application is described. Coannular, 20-chute, and ejector shroud exhaust systems were evaluated. Program results were used in a mission analysis study to determine the aircraft takeoff gross weight required to perform a nominal design mission under Federal Aviation Regulation (1969), Part 36, Stage 3 noise constraints. Mission trade study results confirmed that the ejector shroud was the best of the three exhaust systems studied, with a significant takeoff gross weight advantage over the 20-chute suppressor nozzle, which was second best.

  15. Cobalt diselenide nanoparticles embedded within porous carbon polyhedra as advanced electrocatalyst for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Wu, Renbing; Xue, Yanhong; Liu, Bo; Zhou, Kun; Wei, Jun; Chan, Siew Hwa

    2016-10-01

    Highly efficient and cost-effective electrocatalysts for the oxygen reduction reaction (ORR) are crucial for a variety of renewable energy applications. Herein, strongly coupled hybrid composites composed of cobalt diselenide (CoSe2) nanoparticles embedded within graphitic carbon polyhedra (GCP) have been rationally designed and synthesized as a high-performance ORR catalyst. The catalyst is fabricated by a convenient method involving the simultaneous pyrolysis and selenization of a preformed Co-based zeolitic imidazolate framework (ZIF-67). Benefiting from its unique structural features, the resulting CoSe2/GCP hybrid catalyst shows high stability and excellent electrocatalytic activity towards the ORR (onset and half-wave potentials of 0.935 and 0.806 V vs. RHE, respectively), superior to the state-of-the-art commercial Pt/C catalyst (0.912 and 0.781 V vs. RHE, respectively).

  16. Wind-tunnel studies of advanced cargo aircraft concepts. [leading edge vortex flaps for drag reduction

    NASA Technical Reports Server (NTRS)

    Rao, D. M.; Goglia, G. L.

    1981-01-01

    Accomplishments in vortex flap research are summarized. A singular feature of the vortex flap is that, throughout the angle-of-attack range, the flow type remains qualitatively unchanged. Accordingly, no large or sudden change in the aerodynamic characteristics, such as happens when forcibly maintained attached flow suddenly reverts to separation, will occur with the vortex flap. Typical wind tunnel test data are presented which show the drag reduction potential of the vortex flap concept applied to a supersonic cruise airplane configuration. The new technology offers a means of aerodynamically augmenting roll-control effectiveness on slender wings at higher angles of attack by manipulating the vortex flow generated by leading-edge separation. The proposed manipulator takes the form of a flap hinged at or close to the leading edge, normally retracted flush with the wing upper surface to conform to the airfoil shape.

  17. ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR

    SciTech Connect

    Robert S. Weber

    1999-05-01

    Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on recently demonstrated SO{sub 2}-reduction catalyst performance at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO{sub 2}-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported catalyst formulation work at Engelhard, bench-scale supported catalyst testing at Arthur D. Little and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst which is robust and flexible with regard to choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed, namely that the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated under appropriate levels of conversion and in conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work which would have included further bench-scale testing

  18. Advanced turbo-prop airplane interior noise reduction-source definition

    NASA Technical Reports Server (NTRS)

    Magliozzi, B.; Brooks, B. M.

    1979-01-01

    Acoustic pressure amplitudes and phases were measured in model scale on the surface of a rigid semicylinder mounted in an acoustically treated wind tunnel near a prop-fan (an advanced turboprop with many swept blades) model. Operating conditions during the test simulated those of a prop-fan at 0.8 Mach number cruise. Acoustic pressure amplitude and phase contours were defined on the semicylinder surface. Measurements obtained without the semicylinder in place were used to establish the magnitude of pressure doubling for an aircraft fuselage located near a prop-fan. Pressure doubling effects were found to be 6 dB at 90 deg incidence, decreasing to no effect at grazing incidence. Comparisons of measurements with predictions made using a recently developed prop-fan noise prediction theory, which includes linear and nonlinear source terms, showed good agreement in phase and in peak noise amplitude. Predictions of noise amplitude and phase contours, including pressure doubling effects derived from test, are included for a full-scale prop-fan installation.

  19. Practice reduces task relevant variance modulation and forms nominal trajectory

    PubMed Central

    Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-01-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary. PMID:26639942

  20. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  1. Practice reduces task relevant variance modulation and forms nominal trajectory.

    PubMed

    Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-01-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary. PMID:26639942

  2. 2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction

    SciTech Connect

    Smith, Aaron; Stehly, Tyler; Walter Musial

    2015-09-29

    2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial-scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015 and demonstrable progress towards industry-wide cost reduction goals. Deepwater Wind is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase illustrating the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.

  3. Boundary layer drag reduction research hypotheses derived from bio-inspired surface and recent advanced applications.

    PubMed

    Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe

    2015-12-01

    Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a potential drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesive surfaces, and the flexible skin of the dolphin can increase its swimming speed. Substantial benefits have already been achieved by applying biological functional surfaces in daily life, industry, transportation, and agriculture, and the field has attracted worldwide attention. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are briefly demonstrated. This overview should improve comprehension of the drag-reduction mechanism of the sharkskin surface and of its recent applications in fluid engineering. PMID:26348428

  4. Recent Advances in Developing Platinum Monolayer Electrocatalysts for the O2 Reduction Reaction

    SciTech Connect

    Vukmirovic,M.B.; Sasaki, K.; Zhou, W.-P.; Li, M.; Liu, P.; Wang, J.X.; Adzic, R.R.

    2008-09-15

    For Pt, the best single-element catalyst for many reactions, the question of content and loading is exceedingly important because of its price and availability. Using platinum as a fuel-cell catalyst in automotive applications would cause an unquantifiable increase in the demand for this metal. This major obstacle to using fuel cells in electric cars must be addressed by decreasing the Pt content, which is a great challenge for electrocatalysis. Over the last several years we introduced a new class of electrocatalysts for the oxygen reduction reaction (ORR) based on a monolayer of Pt deposited on metal or alloy carbon-supported nanoparticles. The possibility of decreasing the Pt content of ORR catalysts to a monolayer level is of considerable importance because this reaction requires high loadings due to its slow kinetics. The Pt-monolayer approach has several unique features, including high Pt utilization, enhanced (or decreased) activity, enhanced stability, and direct activity correlations. The synthesis of Pt monolayer (ML) electrocatalysts was facilitated by our new synthesis method, which allowed us to deposit a monolayer of Pt on various metal or alloy nanoparticles [1, 2] for the cathode electrocatalyst. In this approach, a Cu monolayer is first deposited at underpotential in a monolayer-limited reaction on an appropriate metal substrate and is then galvanically displaced by Pt after the electrode is immersed in a K{sub 2}PtCl{sub 4} solution.

  5. Boundary layer drag reduction research hypotheses derived from bio-inspired surface and recent advanced applications.

    PubMed

    Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe

    2015-12-01

    Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a potential drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesive surfaces, and the flexible skin of the dolphin can increase its swimming speed. Substantial benefits have already been achieved by applying biological functional surfaces in daily life, industry, transportation, and agriculture, and the field has attracted worldwide attention. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are briefly demonstrated. This overview should improve comprehension of the drag-reduction mechanism of the sharkskin surface and of its recent applications in fluid engineering.

  6. NASA Glenn's Advanced Subsonic Combustion Rig Supported the Ultra-Efficient Engine Technology Project's Emissions Reduction Test

    NASA Technical Reports Server (NTRS)

    Beltran, Luis R.

    2004-01-01

    The Advanced Subsonic Combustor Rig (ASCR) is NASA Glenn Research Center's unique high-pressure, high-temperature combustor facility supporting the emissions reduction element of the Ultra-Efficient Engine Technology (UEET) Project. The facility can simulate combustor inlet test conditions up to a pressure of 900 psig and a temperature of 1200 F (non-vitiated). ASCR completed three sector tests in fiscal year 2003 for General Electric, Pratt & Whitney, and Rolls-Royce North America. This will provide NASA and U.S. engine manufacturers the information necessary to develop future low-emission combustors and will help them to better understand durability and operability at these high pressures and temperatures.

  7. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  9. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
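
    The variance-based sensitivities referred to in these abstracts can be illustrated with a generic Monte Carlo estimator of first-order Sobol indices. The sketch below uses the common pick-freeze (Saltelli-type) estimator on a deterministic toy function; the papers' contribution, decomposing the variance of a stochastic simulator by reaction channel, is more involved, so this is background rather than a reproduction of their algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy three-input test function (Ishigami-like)
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))       # two independent input samples
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
total_var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # "pick-freeze": column i taken from B
    Si = np.mean(fB * (model(ABi) - fA)) / total_var
    print(f"first-order Sobol index S{i+1} ~ {Si:.3f}")
```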

  10. Advanced noise reduction in placental ultrasound imaging using CPU and GPU: a comparative study

    NASA Astrophysics Data System (ADS)

    Zombori, G.; Ryan, J.; McAuliffe, F.; Rainford, L.; Moran, M.; Brennan, P.

    2010-03-01

    This paper presents a comparison of different implementations of a 3D anisotropic diffusion speckle noise reduction technique on ultrasound images. In this project we are developing a novel volumetric calcification assessment metric for the placenta and providing a software tool for this purpose. The tool can also automatically segment and visualize (in 3D) ultrasound data. One of the first steps in developing such a tool is to find a fast and efficient way to eliminate speckle noise. Previous work on this topic by Duan, Q. [1] and Sun, Q. [2] has shown that the 3D speckle-reducing anisotropic diffusion (3D SRAD) method performs exceptionally well in enhancing ultrasound images for object segmentation. We have therefore implemented this method in our software application and performed a comparative study of the different variants in terms of performance and computation time. To increase processing speed it was necessary to utilize the full potential of current state-of-the-art Graphics Processing Units (GPUs). Our 3D datasets are represented in a spherical volume format. For 2D slice visualization and segmentation, a "scan conversion" or "slice reconstruction" step is needed, which includes coordinate transformation from spherical to Cartesian, re-sampling of the volume, and interpolation. By combining the noise filtering and slice reconstruction in one process on the GPU, we can achieve close to real-time operation on high-quality data sets without the need for down-sampling or reducing image quality. OpenCL was used for the GPU programming, so the presented solution is fully portable.
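
    The anisotropic diffusion idea behind SRAD can be conveyed by its simpler ancestor, Perona-Malik diffusion, which smooths homogeneous regions while preserving edges through a gradient-dependent conductance. The 2D sketch below is a simplified stand-in for the 3D SRAD filter used in the paper, not the authors' GPU/OpenCL code, and the parameter values are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.5, dt=0.15):
    """2D Perona-Malik diffusion: the conductance c shrinks where the
    local gradient is large, so edges diffuse less than flat regions."""
    u = img.astype(float).copy()
    c = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        dN = np.roll(u, -1, axis=0) - u      # differences to the 4 neighbours
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += dt * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(1.0, 0.3, (64, 64))       # stand-in for a noisy image slice
smoothed = perona_malik(noisy)
```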

  11. Neutrino mass without cosmic variance

    NASA Astrophysics Data System (ADS)

    LoVerde, Marilena

    2016-05-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b (k ) and the linear growth parameter f (k ) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b (k ) and f (k ) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b (k ) and f (k ). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b (k ) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.

  12. Current advances of integrated processes combining chemical absorption and biological reduction for NO x removal from flue gas.

    PubMed

    Zhang, Shihan; Chen, Han; Xia, Yinfeng; Liu, Nan; Lu, Bi-Hong; Li, Wei

    2014-10-01

    Anthropogenic nitrogen oxides (NOx) emitted from fossil-fuel-fired power plants cause adverse environmental issues such as acid rain, urban ozone, and photochemical smog. A novel chemical absorption-biological reduction (CABR) integrated process under development is regarded as a promising alternative to conventional selective catalytic reduction processes for NOx removal from flue gas because it is economical and environmentally friendly. The CABR process employs ferrous ethylenediaminetetraacetate [Fe(II)EDTA] as a solvent to absorb the NOx, followed by microbial denitrification of the NOx to harmless nitrogen gas. Meanwhile, the absorbent Fe(II)EDTA is biologically regenerated to sustain adequate NOx removal. Compared with the conventional denitrification process, CABR not only enhances the mass transfer of NO from the gas to the liquid phase but also minimizes the impact of oxygen on the microorganisms. This review provides the current advances in the development of the CABR process for NOx removal from flue gas. PMID:25149446

  13. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    SciTech Connect

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) the installation of three on-line carbon-in-ash monitors and (2) the design of, and results to date from, the advanced digital control/optimization phase of the project.

  14. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940

  15. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.
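
    The over-reporting problem the abstract describes is easy to reproduce: for a trended population, a systematic sample spans the trend, so the design variance of its mean is small, while the simple-random-sampling formula still charges for the trend. A minimal simulation (synthetic population, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 10_000, 100
k = N // n
pop = np.linspace(0, 10, N) + rng.normal(0, 1, N)   # strongly trended population

# all k possible systematic samples give the exact design variance of the mean
means = np.array([pop[s::k].mean() for s in range(k)])
true_var = means.var()

# the naive approach treats each systematic sample as a simple random sample
naive = np.mean([pop[s::k].var(ddof=1) / n for s in range(k)])

print(f"true variance of the systematic mean: {true_var:.4f}")
print(f"average naive SRS-based estimate:     {naive:.4f}")  # several times larger
```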

  16. Analytic variance estimates of Swank and Fano factors

    SciTech Connect

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
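
    For reference, both metrics are simple functions of the moments of the detector output distribution: the Swank factor is I = M1^2/(M0 M2), and the Fano factor is the variance-to-mean ratio. A minimal sketch with synthetic gamma-distributed outputs; the bootstrap spread below merely stands in for the paper's analytic, moment-based variance estimators.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=20.0, scale=50.0, size=50_000)   # simulated detector outputs

m1, m2 = x.mean(), (x**2).mean()
swank = m1**2 / m2                    # Swank factor I = M1^2/(M0*M2), with M0 = 1
fano = x.var() / x.mean()             # Fano factor: variance-to-mean ratio

# bootstrap spread of the Swank estimate, in the spirit of using variance
# estimates of these metrics as Monte Carlo stopping criteria
boot = []
for _ in range(200):
    s = rng.choice(x, size=x.size, replace=True)
    boot.append(s.mean()**2 / (s**2).mean())
print(f"Swank = {swank:.4f} (sd ~ {np.std(boot):.1e}), Fano = {fano:.2f}")
```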

  17. APFBC repowering could help meet Kyoto Protocol CO{sub 2} reduction goals[Advanced Pressurized Fluidized Bed Combustion

    SciTech Connect

    Weinstein, R.E.; Tonnemacher, G.C.

    1999-07-01

    The Clinton Administration signed the 1997 Kyoto Protocol agreement that would limit US greenhouse gas emissions, of which carbon dioxide (CO{sub 2}) is the most significant. While the Kyoto Protocol has not yet been submitted to the Senate for ratification, in the past there have been few proposed environmental actions with continued and widespread attention from the press and environmental activists that did not eventually lead to regulation. Since the Kyoto Protocol might lead to future regulation, its implications need investigation by the power industry. Limiting CO{sub 2} emissions affects the ability of the US to generate reliable, low-cost electricity, and has tremendous potential impact on electric generating companies with a significant investment in coal-fired generation, and on their customers. This paper explores the implications of reducing coal-plant CO{sub 2} by various amounts. The reduction proposed for the US in the Kyoto Protocol is huge: it would commit the US to reduce its CO{sub 2} emissions to 7% below 1990 levels. Since 1990, significant growth in US population and the US economy will have driven carbon emissions 34% higher by year 2010, which means CO{sub 2} would have to be reduced by 30.9%, an extremely difficult goal. The paper tells why. There are, however, coal-based technologies that should be available in time to make significant reductions in coal-plant CO{sub 2} emissions. The paper focuses on one plant repowering method that can reduce CO{sub 2} per kWh by 25%, advanced circulating pressurized fluidized bed combustion combined cycle (APFBC) technology, based on results from a recent APFBC repowering concept evaluation of the Carolina Power and Light Company's (CP and L) L.V. Sutton steam station. The replacement of the existing 50-year base of power generating units needed to meet proposed Kyoto Protocol CO{sub 2} reduction commitments would be a massive undertaking. It is

  18. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    PubMed

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
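
    Flexible budget variance analysis splits the total spending variance into price, quantity (efficiency), and volume components that sum to the total. A hypothetical worked example (all figures illustrative, in the spirit of the article's nursing-cost setting):

```python
# hypothetical nursing-hours example (all figures illustrative)
budget_rate, actual_rate = 30.0, 32.0        # $ per nursing hour
budget_hpu, actual_hpu = 4.0, 4.5            # hours per patient day
budget_volume, actual_volume = 1000, 1100    # patient days

flexible_hours = budget_hpu * actual_volume  # hours allowed at actual volume

price_var = (actual_rate - budget_rate) * actual_hpu * actual_volume
quantity_var = (actual_hpu * actual_volume - flexible_hours) * budget_rate
volume_var = (actual_volume - budget_volume) * budget_hpu * budget_rate

total = actual_rate * actual_hpu * actual_volume - budget_rate * budget_hpu * budget_volume
assert abs(price_var + quantity_var + volume_var - total) < 1e-6
print(price_var, quantity_var, volume_var)   # positive = unfavorable (over budget)
```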

  19. Warped functional analysis of variance.

    PubMed

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can easily be generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  20. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 7 2011-07-01 2011-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this...

  1. Estimation of Variance Components Using Computer Packages.

    ERIC Educational Resources Information Center

    Chastain, Robert L.; Willson, Victor L.

    Generalizability theory is based upon analysis of variance (ANOVA) and requires estimation of variance components for the ANOVA design under consideration in order to compute either G (Generalizability) or D (Decision) coefficients. A number of alternative methods are available for estimating variance components using SAS, BMDP, and ad hoc…
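
    For the simplest one-facet design, the variance components that generalizability theory needs can be recovered from the ANOVA mean squares by the method of moments. A minimal sketch with synthetic balanced data (dedicated packages handle unbalanced designs and more facets):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 30, 10                          # 30 persons (random factor), 10 items each
sigma_p2, sigma_e2 = 4.0, 1.0          # true variance components
scores = rng.normal(0, sigma_p2**0.5, (p, 1)) + rng.normal(0, sigma_e2**0.5, (p, n))

msb = n * scores.mean(axis=1).var(ddof=1)        # between-person mean square
msw = scores.var(axis=1, ddof=1).mean()          # within-person mean square

# expected mean squares: E[MSB] = sigma_e^2 + n*sigma_p^2, E[MSW] = sigma_e^2
var_e = msw
var_p = (msb - msw) / n
print(f"estimated components: persons {var_p:.2f}, error {var_e:.2f}")
```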

  2. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  3. 45 CFR 156.460 - Reduction of enrollee's share of premium to account for advance payments of the premium tax credit.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance Issuer Responsibilities With Respect to Advance Payments of the Premium Tax Credit and Cost-Sharing Reductions §...

  4. 45 CFR 156.460 - Reduction of enrollee's share of premium to account for advance payments of the premium tax credit.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance Issuer Responsibilities With Respect to Advance Payments of the Premium Tax Credit and Cost-Sharing Reductions §...

  5. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    SciTech Connect

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  6. Global variance reduction for Monte Carlo reactor physics calculations

    SciTech Connect

    Zhang, Q.; Abdel-Khalik, H. S.

    2013-07-01

    Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have mostly focused on problems with shielding applications in mind, i.e., problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations at the assembly level, where one typically needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method to prepare homogenized few-group cross-section sets at the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that using the SUBSPACE method, a significant speedup can be achieved over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)

  7. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  8. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. For the one-locus and two-locus cases, we first derive the least-squares estimates (LSE) of the model parameters in fitting the GMA model. We then construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects can be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model can be more useful than the GLM for detecting allelic interactions.

  9. Restricted sample variance reduces generalizability.

    PubMed

    Lakes, Kimberley D

    2013-06-01

    One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context. PMID:23205627
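
    The effect the abstract describes can be demonstrated directly: restricting the range of the measured construct shrinks the person variance component and, with it, the generalizability (g) coefficient. A minimal sketch with synthetic ratings (sample sizes and the restriction cutoff are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
p, r = 500, 4                                 # persons x raters
person = rng.normal(0, 1.0, p)                # true person effects (sd = 1)
ratings = person[:, None] + rng.normal(0, 0.7, (p, r))

def g_coef(y, n_raters):
    msb = n_raters * np.var(y.mean(axis=1), ddof=1)
    msw = np.mean(np.var(y, axis=1, ddof=1))
    sp = max((msb - msw) / n_raters, 0.0)     # person variance component
    return sp / (sp + msw / n_raters)

full = g_coef(ratings, r)
keep = np.abs(person) < 0.5                   # restrict range of the construct
restricted = g_coef(ratings[keep], r)
print(f"g full sample: {full:.2f}, g restricted sample: {restricted:.2f}")
```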

  10. Simulation testing of unbiasedness of variance estimators

    USGS Publications Warehouse

    Link, W.A.

    1993-01-01

    In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
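
    The question posed in the abstract can be set up as follows: simulate R replicates, compare the average of V to the empirical variance of X, and gauge the difference against its own sampling error. The sketch below uses a bootstrap over replicates for that standard error; it illustrates the problem rather than reproducing the article's test statistic.

```python
import numpy as np

rng = np.random.default_rng(5)
R, n = 5000, 25

# example: X = sample mean (unbiased for theta), V = s^2/n (unbiased for Var(X))
X, V = np.empty(R), np.empty(R)
for i in range(R):
    data = rng.exponential(scale=2.0, size=n)   # theta = 2
    X[i] = data.mean()
    V[i] = data.var(ddof=1) / n

diff = V.mean() - X.var(ddof=1)                 # ~0 if V is unbiased for Var(X)

# bootstrap the difference over replicates to get a standard error
boot = []
for _ in range(1000):
    idx = rng.integers(0, R, R)
    boot.append(V[idx].mean() - X[idx].var(ddof=1))
print(f"diff = {diff:.2e}, z = {diff / np.std(boot):.2f}")  # |z| > 2 flags bias
```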

  11. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  12. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  13. Multireader multicase variance analysis for binary data.

    PubMed

    Gallas, Brandon D; Pennello, Gene A; Myers, Kyle J

    2007-12-01

    Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.

  14. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  15. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  16. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  17. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  18. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  19. 48 CFR 970.5232-1 - Reduction or suspension of advance, partial, or progress payments upon finding of substantial...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...

  20. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...

  1. Variance Design and Air Pollution Control

    ERIC Educational Resources Information Center

    Ferrar, Terry A.; Brownstein, Alan B.

    1975-01-01

    Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…

  2. On Some Representations of Sample Variance

    ERIC Educational Resources Information Center

    Joarder, Anwar H.

    2002-01-01

    The usual formula for the variance, which depends on rounding off the sample mean, lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sum of squares does not always give a benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
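
    The mean-free representation alluded to here extends to any sample size: the sample variance equals the average squared pairwise difference, s² = Σ_{i<j}(x_i − x_j)²/(n(n−1)), which avoids subtracting a rounded sample mean. A quick numerical check:

```python
import numpy as np
from itertools import combinations

x = np.array([2.0, 3.5, 7.0, 1.0, 4.5])
n = len(x)

# s^2 = sum over pairs of (x_i - x_j)^2 / (n(n-1)), no sample mean needed
pairwise = sum((a - b) ** 2 for a, b in combinations(x, 2)) / (n * (n - 1))
usual = x.var(ddof=1)
print(pairwise, usual)   # identical up to floating-point rounding
```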

  3. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
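
    A standard illustration of variance-aware tolerancing is the stack-up of independent components: worst-case analysis adds tolerances linearly, while statistical (root-sum-square) analysis adds variances, giving a tighter and usually more realistic assembly tolerance. A minimal sketch (figures illustrative):

```python
import numpy as np

# hypothetical four-component stack; tolerances are +/- values at 3 sigma
tols = np.array([0.10, 0.05, 0.08, 0.12])

worst_case = tols.sum()                    # assumes every part at its limit
rss = np.sqrt((tols ** 2).sum())           # statistical (root-sum-square) stack

print(f"worst case: +/-{worst_case:.3f}, RSS: +/-{rss:.3f}")

# Monte Carlo check: variances of independent parts add
parts = np.random.default_rng(6).normal(0, tols / 3, size=(100_000, 4))
print("simulated 3-sigma of assembly:", 3 * parts.sum(axis=1).std())
```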

  4. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  5. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  6. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  7. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  8. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
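
    A minimal sketch of the model described above (assuming NumPy/SciPy, with synthetic weekly returns standing in for the FBMKLCI data used in the study): minimize the portfolio variance w'Σw subject to full investment and a target mean return.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      returns = rng.normal(0.001, 0.02, size=(260, 5))   # weekly returns, 5 assets
      mu, cov = returns.mean(axis=0), np.cov(returns.T)
      target = mu.mean()                                 # target rate of return

      # Minimize w' cov w subject to sum(w) = 1 and w' mu = target (long-only).
      res = minimize(
          lambda w: w @ cov @ w,
          x0=np.full(5, 0.2),
          bounds=[(0.0, 1.0)] * 5,
          constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                       {"type": "eq", "fun": lambda w: w @ mu - target}],
      )
      print("optimal weights:", res.x.round(3))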

  9. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time... variance to the Administrator at least 30 days before the variance expires. (b) The renewal request...

  10. Functional analysis of variance for association studies.

    PubMed

    Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With advances in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  12. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters to both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.

  13. A Variance Based Active Learning Approach for Named Entity Recognition

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, Hamed; Keyvanpour, Mohammadreza

    The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. Conditional Random Field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
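
    The paper's method is built on CRFs; purely as a generic illustration of variance-based sample selection (assuming scikit-learn, which the authors do not use), one can query the unlabeled examples on which an ensemble's predicted probabilities vary most:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def query_by_variance(forest, X_pool, k=10):
          # Variance of per-tree positive-class probabilities as an uncertainty
          # score; the k highest-variance pool examples go to the annotator next.
          per_tree = np.stack([t.predict_proba(X_pool)[:, 1]
                               for t in forest.estimators_])
          return np.argsort(per_tree.var(axis=0))[-k:]

      # forest = RandomForestClassifier(n_estimators=100).fit(X_labeled, y_labeled)
      # next_batch = query_by_variance(forest, X_pool)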

  14. Encoding of natural sounds by variance of the cortical local field potential.

    PubMed

    Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V

    2016-06-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594

  15. Reduction of organic trace compounds and fresh water consumption by recovery of advanced oxidation processes treated industrial wastewater.

    PubMed

    Bierbaum, S; Öller, H-J; Kersten, A; Klemenčič, A Krivograd

    2014-01-01

    Ozone (O3) has been used successfully in advanced wastewater treatment in paper mills, other sectors and municipalities. To solve the water problems of regions lacking fresh water, wastewater treated by advanced oxidation processes (AOPs) can substitute for fresh water in highly water-consuming industries. Results of this study have shown that when reusing paper mill wastewater, paper strength properties are not impaired and whiteness is only slightly impaired. Furthermore, organic trace compounds are becoming an issue in the German paper industry. The results of this study have shown that AOPs are capable of improving wastewater quality by reducing organic load, colour and organic trace compounds.

  16. Recent advances in the efficient reduction of graphene oxide and its application as energy storage electrode materials

    NASA Astrophysics Data System (ADS)

    Kuila, Tapas; Mishra, Ananta Kumar; Khanra, Partha; Kim, Nam Hoon; Lee, Joong Hee

    2012-12-01

    Efficient reduction of graphene oxide (GO) by chemical, thermal, electrochemical, and photo-irradiation techniques has been reviewed. Particular emphasis has been directed towards the proposed reduction mechanisms of GO by different reducing agents and techniques. The advantages of using different kinds of reducing agents on the basis of their availability, cost-effectiveness, toxicity, and easy product isolation have also been studied extensively. We provide a detailed description of the improvement in physicochemical properties of reduced GO (RGO) compared to pure GO. For example, the electrical conductivity and electrochemical performance of electrochemically obtained RGO are much better than those of chemically or thermally reduced GO materials. We provide examples of how RGO has been used as a supercapacitor electrode material. The specific capacitance of GO increases after reduction, with reported values of 100-300 F g-1. We conclude by proposing new environmentally friendly types of reducing agents that can efficiently remove oxygen functionalities from the surface of GO.

  17. Nonorthogonal Analysis of Variance Programs: An Evaluation.

    ERIC Educational Resources Information Center

    Hosking, James D.; Hamer, Robert M.

    1979-01-01

    Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)

  18. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and evidence of the best available treatment technology and techniques. (2) Economic and legal factors... water in the case of an excessive rise in the contaminant level for which the variance is requested....

  19. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  20. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  1. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT... notify each production or handling operation it certifies to which the temporary variance applies....

  2. Reducing variance in batch partitioning measurements

    SciTech Connect

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty or how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
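
    A worked sketch of that design rule (illustrative numbers, not from the report): under a linear isotherm the sorbed fraction is f = Kd(m/V) / (1 + Kd(m/V)), so setting the solution:sorbent ratio V/m equal to the expected Kd puts f at 0.5, the half-partitioned design point described above.

      def fraction_sorbed(kd_mL_per_g, mass_g, volume_mL):
          # Linear-isotherm mass balance: f = Kd*(m/V) / (1 + Kd*(m/V))
          r = kd_mL_per_g * mass_g / volume_mL
          return r / (1.0 + r)

      kd = 250.0  # expected Kd (mL/g), assumed for illustration
      for v_over_m in (25.0, 250.0, 2500.0):  # candidate solution:sorbent ratios (mL/g)
          f = fraction_sorbed(kd, 1.0, v_over_m)
          print(f"V/m = {v_over_m:6.0f} mL/g -> fraction sorbed = {f:.2f}")
      # V/m = Kd gives f = 0.50, the error-savvy design point.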

  3. 13 CFR 307.22 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....

  4. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  5. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  6. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  7. Variance Components in Discrete Force Production Tasks

    PubMed Central

    SKM, Varadhan; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2010-01-01

    The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate, such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding up of the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model showed adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: speeding up the cyclic
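
    A minimal sketch (assuming NumPy; synthetic data, not the study's recordings) of the decomposition behind the "good"/"bad" components: across-trial variance of the finger commands is projected onto the null space of the total-force constraint, where it cannot affect total force, and onto its orthogonal complement, where it can.

      import numpy as np

      rng = np.random.default_rng(1)
      modes = rng.normal(2.0, 0.5, size=(200, 4))  # trials x fingers (synthetic)

      j = np.ones((1, 4))                          # total force = sum of finger forces
      _, _, vt = np.linalg.svd(j)
      force_dir, null_basis = vt[:1], vt[1:]       # row space of j and its null space

      centered = modes - modes.mean(axis=0)
      v_bad = (centered @ force_dir.T).var(axis=0, ddof=1).sum() / 1    # per DOF
      v_good = (centered @ null_basis.T).var(axis=0, ddof=1).sum() / 3  # per DOF

      # One common normalization of the synergy index (conventions vary):
      dv = (v_good - v_bad) / (v_good + v_bad)
      print(f"good: {v_good:.4f}  bad: {v_bad:.4f}  synergy index: {dv:.3f}")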

  8. 40 CFR 142.60 - Variances from the maximum contaminant level for total trihalomethanes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... result in a marginal reduction in TTHM for the system. If, upon application by a system for a variance... determination as to the availability and effectiveness of such treatment methods shall be based upon studies by... economically reasonable, and that the TTHM reductions obtained will be commensurate with the costs...

  9. 45 CFR 156.480 - Oversight of the administration of the cost-sharing reductions and advance payments of the...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance Issuer Responsibilities With Respect to Advance Payments of the Premium Tax Credit and Cost... 45 Public Welfare 1 2014-10-01 2014-10-01 false Oversight of the administration of the...

  10. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
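
    A textbook sketch of the idea (not the authors' animal-model implementation): variational Bayes for a normal sample with unknown mean and precision, assuming a factorized posterior q(mu)q(tau) with conjugate priors. The variance posterior is the inverse of tau's Gamma posterior, whose shorter tails relative to MCMC are the behavior noted above.

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.normal(5.0, 2.0, size=100)
      n, xbar, xsq = len(x), x.mean(), (x ** 2).sum()

      # Priors: mu ~ N(mu0, (lam0*tau)^-1), tau ~ Gamma(a0, b0) (rate form).
      mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3
      e_tau = 1.0                                  # initial guess for E[tau]

      for _ in range(50):                          # coordinate-ascent updates
          mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)      # q(mu) = N(mu_n, 1/lam_n)
          lam_n = (lam0 + n) * e_tau
          e_mu, e_mu2 = mu_n, mu_n ** 2 + 1.0 / lam_n
          a_n = a0 + 0.5 * (n + 1)                          # q(tau) = Gamma(a_n, b_n)
          b_n = b0 + 0.5 * (xsq - 2 * n * xbar * e_mu + n * e_mu2
                            + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0 ** 2))
          e_tau = a_n / b_n

      print(f"posterior mean of the variance: {b_n / (a_n - 1):.3f}")  # E[1/tau]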

  11. 45 CFR 156.440 - Plans eligible for advance payments of the premium tax credit and cost-sharing reductions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... tax credit and cost-sharing reductions. 156.440 Section 156.440 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance...

  12. 45 CFR 156.215 - Advance payments of the premium tax credit and cost-sharing reduction standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cost-sharing reduction standards. 156.215 Section 156.215 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum...

  13. 45 CFR 156.440 - Plans eligible for advance payments of the premium tax credit and cost-sharing reductions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... tax credit and cost-sharing reductions. 156.440 Section 156.440 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Health Insurance...

  14. Can currently available advanced combustion biomass cook-stoves provide health relevant exposure reductions? Results from initial assessment of select commercial models in India.

    PubMed

    Sambandam, Sankar; Balakrishnan, Kalpana; Ghosh, Santu; Sadasivam, Arulselvan; Madhav, Satish; Ramasamy, Rengaraj; Samanta, Maitreya; Mukhopadhyay, Krishnendu; Rehman, Hafeez; Ramanathan, Veerabhadran

    2015-03-01

    Household air pollution from the use of solid fuels is a major contributor to the national burden of disease in India. Currently available models of advanced combustion biomass cook-stoves (ACS) report significantly higher efficiencies and lower emissions in the laboratory when compared to traditional cook-stoves, but relatively little is known about household-level exposure reductions achieved under routine conditions of use. We report results from initial field assessments of six commercial ACS models from the states of Tamil Nadu and Uttar Pradesh in India. We monitored 72 households (divided into six arms, each receiving one ACS model) for 24-h kitchen area concentrations of PM2.5 and CO before and 1-6 months after installation of the new stove, together with detailed information on fixed and time-varying household characteristics. Detailed surveys collected information on user perceptions regarding acceptability for routine use. While the median percent reductions in 24-h PM2.5 and CO concentrations ranged from 2 to 71% and 10 to 66%, respectively, concentrations consistently exceeded WHO air quality guideline values across all models, raising questions regarding the health relevance of such reductions. Most models were perceived to be sub-optimally designed for routine use, often resulting in inappropriate and inadequate levels of use. Household concentration reductions also run the risk of being compromised by high ambient backgrounds from community-level solid-fuel use and contributions from surrounding fossil fuel sources. The results indicate that achieving health-relevant exposure reductions in solid-fuel-using households will require cook-stove technologies that integrate emissions reductions with ease of use and adoption at community scale. Efforts are also urgently needed to accelerate progress towards cleaner fuels. PMID:25293811

  15. Some Investigations on Hardness of Investment Casting Process After Advancements in Shell Moulding for Reduction in Cycle Time

    NASA Astrophysics Data System (ADS)

    Singh, R.; Mahajan, V.

    2014-07-01

    In the present work, surface hardness investigations have been made on acrylonitrile butadiene styrene (ABS) pattern based investment castings after advancements in shell moulding for the replication of biomedical implants. For the present study, a hip joint, made of ABS material, was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, the mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings with the addition of nylon fibre (1-2 cm in length, 1.5D). This study outlines the surface hardness mechanism for cast components prepared from the ABS master pattern after the advancement in shell moulding. The results of the study highlight that, during shell production, fibre-modified shells have a much reduced drain time. Further, the results are supported by cooling-rate and microstructure analysis of the casting.

  16. [ADVANCE-ON Trial; How to Achieve Maximum Reduction of Mortality in Patients With Type 2 Diabetes].

    PubMed

    Kanorskiĭ, S G

    2015-01-01

    Of 10,261 patients with type 2 diabetes who survived to the end of the randomized ADVANCE trial, 83% were included in the ADVANCE-ON project for observation over 6 years. The difference in blood pressure achieved during 4.5 years of in-trial treatment with the fixed perindopril/indapamide combination quickly vanished, but the significant decrease in total and cardiovascular mortality in the group treated with this combination for 4.5 years was sustained during the 6 years of post-trial follow-up. The results may be related to a gradually weakening protective effect of the perindopril/indapamide combination on the cardiovascular system and indicate the expedience of long-term use of this antihypertensive therapy for maximal lowering of mortality in patients with diabetes. PMID:26164995

  17. [Bronchoscopic lung volume reduction (BLVR) in advanced pulmonary emphysema: dreams of the future or much ado about nothing?].

    PubMed

    Stanzel, F

    2012-01-01

    Bronchoscopic lung volume reduction (BLVR) is a rapidly developing area that is currently being intensively evaluated and discussed. There is great interest in developing new treatment modalities that can reduce lung volume and air trapping without the risk of a surgical intervention. The different techniques of BLVR are characterised by lower morbidity and mortality, but also by a more limited effect. The placement of valves leads to blockade of the airway and sometimes to absorption atelectasis. The valves have been most intensively evaluated and are frequently applied. Besides the blocking devices, partially blocking or deforming devices such as coils are available and are introduced in heterogeneous emphysema. Irreversible procedures such as polymeric lung volume reduction or thermal vapour ablation are also used. The creation of airway bypasses to allow trapped air to escape is mainly employed in homogeneous emphysema. Following such bypass creation there is an improvement in lung function and a reduction of dyspnea, but only for a limited time. The bypass procedure has disappeared from bronchoscopy units completely. We give a review of recent developments regarding BLVR and the state of the art.

  18. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    SciTech Connect

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-23

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
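
    One concrete use of the predictive variance mentioned above is scoring a model by the likelihood it assigns to held-out measurements. A minimal sketch (assuming NumPy and a Gaussian predictive distribution, which is an assumption, not the paper's specific model):

      import numpy as np

      def mean_predictive_log_likelihood(y, pred_mean, pred_var):
          # Average Gaussian log-density of held-out measurements under the
          # model's predictive mean/variance; higher is better. Models that
          # ignore variance structure are penalized where they are overconfident.
          pred_var = np.maximum(np.asarray(pred_var, float), 1e-12)
          return np.mean(-0.5 * (np.log(2 * np.pi * pred_var)
                                 + (y - pred_mean) ** 2 / pred_var))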

  20. Reduced Variance for Material Sources in Implicit Monte Carlo

    SciTech Connect

    Urbatsch, Todd J.

    2012-06-25

    Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
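
    An illustrative sketch of the modification described (hypothetical names; not the Jayenne code): below a user-specified temperature cutoff, the material source updates the material state deterministically instead of being sampled as particles.

      def apply_material_source(cell, dt, t_cutoff, emit_source_particles):
          # Hypothetical per-cell, per-timestep source handling (illustration only).
          if cell.temperature < t_cutoff:
              # Small source: bypass the IMC split entirely and deposit the
              # source energy into the material -- zero sampling variance.
              cell.material_energy += cell.material_source_rate * dt
          else:
              # Normal IMC path: the implicit split sends part of the material
              # source into the radiation field as sampled source particles.
              emit_source_particles(cell, cell.material_source_rate * dt)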

  1. Variance estimation for nucleotide substitution models.

    PubMed

    Chen, Weishan; Wang, Hsiuying

    2015-09-01

    The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first-order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of the bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.

  2. A noise variance estimation approach for CT

    NASA Astrophysics Data System (ADS)

    Shen, Le; Jin, Xin; Xing, Yuxiang

    2012-10-01

    The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low-dose computed tomography. Various noise estimation and suppression approaches have been developed and studied to enhance image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using the GAT. After the transform, the projection data are denoised conventionally under the assumption that the noise variance uniformly equals 1. The difference between the original and the denoised projections is treated as pure noise, and the global variance σ² can be estimated from this residual. The final denoising step is then performed with the estimated σ². The proposed approach is verified on a cone-beam CT system and shown to obtain a more accurate estimate of the actual noise parameter. We also examine the FBP algorithm with the two-step noise suppression in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach could be effective in practical imaging applications.
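
    A minimal sketch of the two-step estimate (assuming NumPy/SciPy; a Gaussian filter stands in for the conventional denoiser, and the plain Anscombe transform for its generalized form):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def estimate_global_sigma(projection):
          # 1) Variance-stabilize (Anscombe; the paper uses the generalized form,
          #    which also accounts for the Gaussian electronic-noise component).
          stabilized = 2.0 * np.sqrt(np.asarray(projection, float) + 3.0 / 8.0)
          # 2) Denoise under the unit-variance assumption, then treat the
          #    residual as pure noise and read off the global sigma.
          denoised = gaussian_filter(stabilized, sigma=1.0)
          return (stabilized - denoised).std()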

  3. N2O formation from advanced NOx control processes (selective non-catalytic reduction and coal reburning)

    SciTech Connect

    Montgomery, T.A.; Martz, T.D.; Quartucy, G.C.; Muzio, L.J. ); Sheldon, M.S.; Cole, J.A.; Kramlich, J.C. ); Samuelsen, G.S.; Reddy, V. )

    1991-04-01

    The current work addressed the potential of N2O production from two NOx reduction techniques: selective non-catalytic NOx reduction (SNCR processes) and reburning with pulverized coal. The effects of SNCR processes (utilizing ammonia, urea, and cyanuric acid injection) and reburning processes (with bituminous and lignite coals) upon NOx and N2O levels were evaluated. Pilot-scale testing and chemical kinetic modeling were used to characterize the N2O production from SNCR processes over a range of process parameters. The data show that the evaluated SNCR processes (ammonia, urea, and cyanuric acid injection) produced some N2O as a by-product. Ammonia injection produced the lowest levels of N2O; less than 4% of the reduced NOx was converted to N2O. Cyanuric acid injection produced the highest levels; N2O increases ranged between 12-40% of the reduced NOx. The conversion of NOx to N2O with urea injection ranged from 7-25%. Pilot-scale testing was used to characterize the N2O production from reburning processes with coal over a range of process parameters. Parameters included coal type, firing rate, initial NO level, and reburn zone stoichiometry. Data show that N2O is not a major product of coal reburning processes for NOx reduction. 56 figs., 13 tabs.

  4. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These included Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had already been bypassed many times; in that case the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data were organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to include only variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  5. Advanced Subsonic Technology (AST) Separate-Flow High-Bypass Ratio Nozzle Noise Reduction Program Test Report

    NASA Technical Reports Server (NTRS)

    Low, John K. C.; Schweiger, Paul S.; Premo, John W.; Barber, Thomas J.; Saiyed, Naseem (Technical Monitor)

    2000-01-01

    NASA's model-scale nozzle noise tests show that it is possible to achieve a 3 EPNdB jet noise reduction with inward-facing chevrons and flipper-tabs installed on the primary nozzle together with fan nozzle chevrons. These chevrons and tabs are simple devices and are easy to incorporate into existing short-duct separate-flow non-mixed nozzle exhaust systems. However, these devices are expected to cause some small amount of thrust loss relative to the axisymmetric baseline nozzle system. Thus, it is important to have them further tested in a calibrated nozzle performance test facility to quantify their thrust performance. The choice of chevrons or tabs for jet noise suppression would most likely be based on the results of thrust loss performance tests to be conducted by Aero System Engineering (ASE) Inc. It is anticipated that the most promising concepts identified from this program will be validated in full-scale engine tests at both Pratt & Whitney and Allied-Signal, under funding from NASA's Engine Validation of Noise Reduction Concepts (EVNRC) programs. This will bring the technology readiness level to the point where the jet noise suppression concepts could be incorporated with high confidence into either new or existing turbofan engines having short-duct, separate-flow nacelles.

  6. Wave propagation analysis using the variance matrix.

    PubMed

    Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S

    2014-10-01

    The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243

  7. Simulated flight acoustic investigation of treated ejector effectiveness on advanced mechanical suppressors for high velocity jet noise reduction

    NASA Technical Reports Server (NTRS)

    Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.

    1986-01-01

    Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.

  8. Giardia duodenalis: Number and Fluorescence Reduction Caused by the Advanced Oxidation Process (H2O2/UV)

    PubMed Central

    Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; dos Santos, Luciana Urbano

    2014-01-01

    This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJ cm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone. PMID:27379301

  9. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  10. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  11. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  12. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  13. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  14. Regression Calibration with Heteroscedastic Error Variance

    PubMed Central

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
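
    A minimal sketch of the regression-calibration idea (assuming NumPy; a simple linear outcome model rather than the logistic/Cox settings treated in the paper): fit E[X|W] in the validation data, then use the calibrated covariate in the primary regression.

      import numpy as np

      rng = np.random.default_rng(3)
      x = rng.normal(0.0, 1.0, 500)            # gold-standard exposure
      w = x + rng.normal(0.0, 0.7, 500)        # error-prone measurement
      y = 2.0 * x + rng.normal(0.0, 1.0, 500)  # outcome; true coefficient = 2

      val = slice(0, 100)                      # validation subset with x observed
      slope, intercept = np.polyfit(w[val], x[val], 1)   # calibration model E[X|W]
      x_hat = intercept + slope * w

      beta_naive = np.polyfit(w, y, 1)[0]      # attenuated by measurement error
      beta_rc = np.polyfit(x_hat, y, 1)[0]     # regression-calibrated estimate
      print(f"naive: {beta_naive:.2f}  calibrated: {beta_rc:.2f}  true: 2.00")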

  15. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...

  16. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...

  17. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  18. Understanding gender variance in children and adolescents.

    PubMed

    Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A

    2014-06-01

    Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. PMID:24972420

  20. Videotape Project in Child Variance. Final Report.

    ERIC Educational Resources Information Center

    Morse, William C.; Smith, Judith M.

    The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…

  1. Variance Anisotropy of Solar Wind fluctuations

    NASA Astrophysics Data System (ADS)

    Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.

    2013-12-01

    Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfvén waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfvén waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.

  2. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13... from the standard under both the Longshoremen's and Harbor Workers' Compensation Act and the...

  3. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION...

  4. Number variance for arithmetic hyperbolic surfaces

    NASA Astrophysics Data System (ADS)

    Luo, W.; Sarnak, P.

    1994-03-01

    We prove that the number variance for the spectrum of an arithmetic surface is highly nonrigid in part of the universal range. In fact it is close to having a Poisson behavior. This fact was discovered numerically by Schmit, Bogomolny, Georgeot and Giannoni. It has its origin in the high degeneracy of the length spectrum, first observed by Selberg.

  5. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...

  6. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  7. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... employment service complaint procedures set forth at §§ 658.421 (i) and (j), 658.422 and 658.423 of...

  8. 78 FR 14122 - Revocation of Permanent Variances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-04

    ... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it was... for tank scaffolds under the general provisions of the final rule (see 61 FR 46033). In this...

  9. A concise guide to sustainable PEMFCs: recent advances in improving both oxygen reduction catalysts and proton exchange membranes.

    PubMed

    Scofield, Megan E; Liu, Haiqing; Wong, Stanislaus S

    2015-08-21

    The rising interest in fuel cell vehicle (FCV) technology has engendered a growing need and realization to develop rational chemical strategies to create highly efficient, durable, and cost-effective fuel cells. Specifically, technical limitations associated with the major constituent components of the basic proton exchange membrane fuel cell (PEMFC), namely the cathode catalyst and the proton exchange membrane (PEM), have proven to be particularly demanding to overcome. Therefore, research trends within the community in recent years have focused on (i) accelerating the sluggish kinetics of the catalyst at the cathode and (ii) minimizing overall Pt content, while simultaneously (a) maximizing activity and durability as well as (b) increasing membrane proton conductivity without any concomitant loss in stability or damage due to flooding. In this light, as an example, high temperature PEMFCs offer a promising avenue to improve the overall efficiency and marketability of fuel cell technology. In this Critical Review, recent advances in optimizing both cathode materials and PEMs as well as the future and peculiar challenges associated with each of these systems will be discussed.

  10. Advanced noise reduction techniques for ultra-low phase noise optical-to-microwave division with femtosecond fiber combs.

    PubMed

    Zhang, Wei; Xu, Zhenyu; Lours, Michel; Boudot, Rodolphe; Kersalé, Yann; Luiten, Andre N; Le Coq, Yann; Santarelli, Giorgio

    2011-05-01

    We report what we believe to be the lowest phase noise optical-to-microwave frequency division using fiber-based femtosecond optical frequency combs: a residual phase noise of -120 dBc/Hz at 1 Hz offset from an 11.55 GHz carrier frequency. Furthermore, we report a detailed investigation into the fundamental noise sources which affect the division process itself. Two frequency combs with quasi-identical configurations are referenced to a common ultrastable cavity laser source. To identify each of the limiting effects, we implement an ultra-low noise carrier-suppression measurement system, which avoids the detection and amplification noise of more conventional techniques. This technique suppresses these unwanted sources of noise to very low levels. In the Fourier frequency range of ∼200 Hz to 100 kHz, a feed-forward technique based on a voltage-controlled phase shifter delivers a further noise reduction of 10 dB. For lower Fourier frequencies, optical power stabilization is implemented to reduce the relative intensity noise which causes unwanted phase noise through power-to-phase conversion in the detector. We implement and compare two possible control schemes based on an acousto-optical modulator and comb pump current. We also present wideband measurements of the relative intensity noise of the fiber comb. PMID:21622045

  11. Advanced oxygen reduction reaction catalyst based on nitrogen and sulfur co-doped graphene in alkaline medium.

    PubMed

    Li, Yongfeng; Li, Meng; Jiang, Liqing; Lin, Lin; Cui, Lili; He, Xingquan

    2014-11-14

    A novel nitrogen and sulfur co-doped graphene (N-S-G) catalyst for oxygen reduction reaction (ORR) has been prepared by pyrolysing graphite oxide and poly[3-amino-5-mercapto-1,2,4-triazole] composite (PAMTa). The atomic percentage of nitrogen and sulfur for the prepared N-S-G can be adjusted by controlling the pyrolysis temperature. Furthermore, the catalyst pyrolysed at 1000 °C, denoted N-S-G 1000, exhibits the highest catalytic activity for ORR, which displays the highest content of graphitic-N and thiophene-S among all the pyrolysed samples. The electrocatalytic performance of N-S-G 1000 is significantly better than that of PAMTa and reduced graphite oxide composite. Remarkably, the N-S-G 1000 catalyst is comparable with Pt/C in terms of the onset and half-wave potentials, and displays larger kinetic limiting current density and better methanol tolerance and stability than Pt/C for ORR in an alkaline medium. PMID:25255312

  12. Resampling analysis of participant variance to improve the efficiency of sensor modeling perception experiments

    NASA Astrophysics Data System (ADS)

    O'Connor, John D.; Hixson, Jonathan; McKnight, Patrick; Peterson, Matthew S.; Parasuraman, Raja

    2010-04-01

    Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) sensor models, such as NV Therm IP, are developed through perception experiments that investigate phenomena associated with sensor performance (e.g., sampling, noise, sensitivity). A standardized laboratory perception testing method developed in the mid-1990s has been responsible for advances in sensor modeling that are supported by field sensor performance experiments [1]. The number of participants required to yield dependable results for these experiments could not be estimated because the variance in performance due to participant differences was not known. NVESD and George Mason University (GMU) scientists measured the contribution of participant variance within the overall experimental variance for 22 individuals each exposed to 1008 stimuli. Results of the analysis indicate that the total participant contribution to overall experimental variance was between 1% and 2%.
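
    The participant share of variance reported above can be estimated with a one-way random-effects decomposition; the sketch below uses synthetic scores (22 participants by 1008 stimuli, as in the abstract) with an injected participant effect of roughly the reported size. All array and variable names are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)
      subject_effect = rng.normal(0.0, 0.15, size=(22, 1))    # ~2% of total variance
      scores = rng.normal(size=(22, 1008)) + subject_effect   # synthetic responses

      p, s = scores.shape
      grand = scores.mean()
      ms_between = s * ((scores.mean(axis=1) - grand) ** 2).sum() / (p - 1)
      ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (s - 1))

      var_participant = max((ms_between - ms_within) / s, 0.0)  # random-effects estimate
      share = var_participant / (var_participant + ms_within)
      print(f"participant share of total variance: {share:.1%}")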

  13. Advances in biotreatment of acid mine drainage and biorecovery of metals: 2. Membrane bioreactor system for sulfate reduction.

    PubMed

    Tabak, Henry H; Govind, Rakesh

    2003-12-01

    Several biotreatment techniques for sulfate conversion by sulfate-reducing bacteria (SRB) have been proposed in the past; however, few of them have been practically applied to treat sulfate-containing acid mine drainage (AMD). This research deals with the development of an innovative polypropylene hollow-fiber membrane bioreactor system for the treatment of acid mine water from the Berkeley Pit, Butte, MT, using hydrogen-consuming SRB biofilms. The advantages of using the membrane bioreactor over conventional tall-liquid-phase sparged-gas bioreactor systems are: a large microporous membrane surface exposed to the liquid phase; formation of hydrogen sulfide outside the membrane, preventing mixing with the pressurized hydrogen gas inside the membrane; no requirement for a gas recycle compressor; a membrane surface suitable for immobilization of active SRB, resulting in the formation of biofilms and thus preventing the washout problems associated with suspended-culture reactors; and lower operating costs in membrane bioreactors, eliminating gas recompression and gas recycle costs. Information is provided on sulfate reduction rate studies and on biokinetic tests with suspended SRB in anaerobic digester sludge and sediment master culture reactors and with SRB biofilms in bench-scale SRB membrane bioreactors. Biokinetic parameters have been determined using biokinetic models for the master culture and membrane bioreactor systems. Data are presented on the effect of acid mine water sulfate loading at 25, 50, 75 and 100 ml/min in scale-up SRB membrane units, under varied temperatures (25, 35 and 40 degrees C), to determine and optimize sulfate conversions for an effective AMD biotreatment. Pilot-scale studies have generated data on the effect of flow rates of acid mine water (MGD) and varied inlet sulfate concentrations in the influents on the resultant outlet sulfate concentration in the effluents and on the number of SRB membrane modules needed for the desired sulfate conversion in

  14. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... submits concurrently— (1) A request for the variance that documents to his satisfaction that the facility is unable to meet the time requirements for which the variance is requested; and (2) A revised...

  15. Variant evolutionary trees under phenotypic variance.

    PubMed

    Nishimura, Kinya; Isoda, Yutaka

    2004-01-01

    Evolutionary branching, the coevolutionary development of two or more distinctive traits from a single trait in a population, is a central issue in recent studies of adaptive dynamics. Previous studies revealed that trait variance is a minimum requirement for evolutionary branching but does not play an important role in shaping the evolutionary pattern of branching. Here we demonstrate that trait evolution can follow various branching paths, starting from an identical initial trait and arriving at different terminal traits, determined solely by changing the assumption of trait variance. The key feature of this phenomenon is the topological configuration of the equilibria and the initial point in the manifold of dimorphism from which dimorphic branches develop. This suggests that the existing monomorphic or polymorphic set in a population is not a unique, inevitable consequence of an identical initial phenotype.

  16. Analysis of Variance of Multiply Imputed Data.

    PubMed

    van Ginkel, Joost R; Kroonenberg, Pieter M

    2014-01-01

    As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance for multiply imputed data sets. It involves both reformulating the ANOVA model as a regression model using effect coding of the predictors and applying existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
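
    The combination rules referred to are Rubin's rules; a minimal sketch for pooling a single regression coefficient across m imputations follows (the full F-test pooling in the paper additionally needs the coefficient covariance matrices). The numbers are made up.

      import numpy as np

      def pool_rubin(estimates, variances):
          """Pool one parameter across m imputations with Rubin's rules."""
          q = np.asarray(estimates, dtype=float)
          u = np.asarray(variances, dtype=float)
          m = len(q)
          qbar = q.mean()                              # pooled point estimate
          t = u.mean() + (1 + 1 / m) * q.var(ddof=1)   # within + between variance
          return qbar, t

      # e.g. one effect-coded group coefficient estimated on m = 5 imputed data sets
      qbar, t = pool_rubin([0.52, 0.47, 0.55, 0.50, 0.49],
                           [0.010, 0.011, 0.009, 0.010, 0.012])
      print(f"pooled estimate {qbar:.3f}, standard error {t ** 0.5:.3f}")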

  17. Analysis of variance of microarray data.

    PubMed

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments, taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models, and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.

  18. PHD filtering with localised target number variance

    NASA Astrophysics Data System (ADS)

    Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel

    2013-05-01

    Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process (e.g., for sensor management purposes), the local target number is insufficient unless some confidence in the estimation of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimation of the localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.

  1. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  2. Uses and abuses of analysis of variance.

    PubMed Central

    Evans, S J

    1983-01-01

    Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches. PMID:6347228

  3. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
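
    For the two issues the abstract raises, heterogeneity of variance and non-normality, a scipy sketch with made-up samples shows the usual alternatives: Levene's test for equal variances and the Kruskal-Wallis test as the nonparametric analogue of one-way ANOVA.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      g1 = rng.normal(0.0, 1.0, 40)   # three hypothetical simulation outputs
      g2 = rng.normal(0.2, 1.5, 40)
      g3 = rng.normal(0.1, 2.0, 40)

      w, p_var = stats.levene(g1, g2, g3)    # robust test of equal variances
      h, p_loc = stats.kruskal(g1, g2, g3)   # nonparametric test of equal locations
      print(f"Levene W={w:.2f} (p={p_var:.3f}); Kruskal-Wallis H={h:.2f} (p={p_loc:.3f})")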

  4. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  5. Variance of gene expression identifies altered network constraints in neurological disease.

    PubMed

    Mar, Jessica C; Matigian, Nicholas A; Mackay-Sim, Alan; Mellick, George D; Sue, Carolyn M; Silburn, Peter A; McGrath, John J; Quackenbush, John; Wells, Christine A

    2011-08-01

    Gene expression analysis has become a ubiquitous tool for studying a wide range of human diseases. In a typical analysis we compare distinct phenotypic groups and attempt to identify genes that are, on average, significantly different between them. Here we describe an innovative approach to the analysis of gene expression data, one that identifies differences in expression variance between groups as an informative metric of the group phenotype. We find that genes with different expression variance profiles are not randomly distributed across cell signaling networks. Genes with low-expression variance, or higher constraint, are significantly more connected to other network members and tend to function as core members of signal transduction pathways. Genes with higher expression variance have fewer network connections and also tend to sit on the periphery of the cell. Using neural stem cells derived from patients with schizophrenia (SZ) or Parkinson's disease (PD) and from a healthy control group, we find marked differences in expression variance in cell signaling pathways that shed new light on potential mechanisms associated with these diverse neurological disorders. In particular, we find that expression variance of core networks in the SZ patient group was considerably constrained, while in contrast the PD patient group demonstrated much greater variance than expected. One hypothesis is that diminished variance in SZ patients corresponds to an increased degree of constraint in these pathways and a corresponding reduction in robustness of the stem cell networks. These results underscore the role that variation plays in biological systems and suggest that analysis of expression variance is far more important in disease than previously recognized. Furthermore, modeling patterns of variability in gene expression could fundamentally alter the way in which we think about how cellular networks are affected by disease processes.

  6. Minimum variance brain source localization for short data sequences.

    PubMed

    Ravan, Maryam; Reilly, James P; Hasey, Gary

    2014-02-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data.
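
    The FFA construction itself is in the paper; the minimum variance (MVDR) weights it builds on are standard, and a numpy sketch follows. The covariance R and lead-field vector a are synthetic, and diagonal loading is added because short records give poorly conditioned sample covariances.

      import numpy as np

      def mv_weights(R, a, loading=1e-3):
          """Minimum variance (MVDR) weights: w = R^-1 a / (a^H R^-1 a)."""
          n = R.shape[0]
          R = R + loading * (np.trace(R) / n) * np.eye(n)  # diagonal loading
          Ri_a = np.linalg.solve(R, a)
          return Ri_a / (a.conj() @ Ri_a)

      rng = np.random.default_rng(3)
      x = rng.normal(size=(64, 200))          # 64 channels, short 200-sample record
      R = x @ x.T / x.shape[1]                # sample covariance
      a = rng.normal(size=64)                 # hypothetical source lead field
      w = mv_weights(R, a)
      print("source power estimate:", float(w @ R @ w))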

  7. Visual SLAM Using Variance Grid Maps

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance

  8. Effectiveness of Losartan-Loaded Hyaluronic Acid (HA) Micelles for the Reduction of Advanced Hepatic Fibrosis in C3H/HeN Mice Model

    PubMed Central

    Thomas, Reju George; Moon, Myeong Ju; Kim, Jo Heon; Lee, Jae Hyuk; Jeong, Yong Yeon

    2015-01-01

    Advanced hepatic fibrosis therapy using drug-delivering nanoparticles is a relatively unexplored area. Angiotensin type 1 (AT1) receptor blockers such as losartan can be delivered to hepatic stellate cells (HSC), blocking their activation and thereby reducing fibrosis progression in the liver. In our study, we analyzed the possibility of utilizing drug-loaded vehicles such as hyaluronic acid (HA) micelles carrying losartan to attenuate HSC activation. Losartan, which exhibits inherent lipophilicity, was loaded into the hydrophobic core of HA micelles with a 19.5% drug loading efficiency. An advanced liver fibrosis model was developed using C3H/HeN mice subjected to 20 weeks of prolonged TAA/ethanol weight-adapted treatment. The cytocompatibility and cell uptake profile of losartan-HA micelles were studied in murine fibroblast cells (NIH3T3), human hepatic stellate cells (hHSC) and FL83B cells (hepatocyte cell line). The ability of these nanoparticles to attenuate HSC activation was studied in activated HSC cells based on alpha smooth muscle actin (α-sma) expression. Mice treated with oral losartan or losartan-HA micelles were analyzed for serum enzyme levels (ALT/AST, CK and LDH) and collagen deposition (hydroxyproline levels) in the liver. The accumulation of HA micelles was observed in fibrotic livers, which suggests increased delivery of losartan compared to normal livers and specific uptake by HSC. Active reduction of α-sma was observed in hHSC and the liver sections of losartan-HA micelle-treated mice. The serum enzyme levels and collagen deposition of losartan-HA micelle-treated mice was reduced significantly compared to the oral losartan group. Losartan-HA micelles demonstrated significant attenuation of hepatic fibrosis via an HSC-targeting mechanism in our in vitro and in vivo studies. These nanoparticles can be considered as an alternative therapy for liver fibrosis. PMID:26714035

  9. Uncovering hidden variance: pair-wise SNP analysis accounts for additional variance in nicotine dependence

    PubMed Central

    Culverhouse, Robert C.; Saccone, Nancy L.; Stitzel, Jerry A.; Wang, Jen C.; Steinbach, Joseph H.; Goate, Alison M.; Schwantes-An, Tae-Hwi; Grucza, Richard A.; Stevens, Victoria L.; Bierut, Laura J.

    2010-01-01

    Results from genome-wide association studies of complex traits account for only a modest proportion of the trait variance predicted to be due to genetics. We hypothesize that joint analysis of polymorphisms may account for more variance. We evaluated this hypothesis on a case–control smoking phenotype by examining pairs of nicotinic receptor single-nucleotide polymorphisms (SNPs) using the Restricted Partition Method (RPM) on data from the Collaborative Genetic Study of Nicotine Dependence (COGEND). We found evidence of joint effects that increase explained variance. Four signals identified in COGEND were testable in independent American Cancer Society (ACS) data, and three of the four signals replicated. Our results highlight two important lessons: joint effects that increase the explained variance are not limited to loci displaying substantial main effects, and joint effects need not display a significant interaction term in a logistic regression model. These results suggest that the joint analyses of variants may indeed account for part of the genetic variance left unexplained by single SNP analyses. Methodologies that limit analyses of joint effects to variants that demonstrate association in single SNP analyses, or require a significant interaction term, will likely miss important joint effects. PMID:21079997

  10. Degradation of vinyl chloride (VC) by the sulfite/UV advanced reduction process (ARP): effects of process variables and a kinetic model.

    PubMed

    Liu, Xu; Yoon, Sunhee; Batchelor, Bill; Abdel-Wahab, Ahmed

    2013-06-01

    Vinyl chloride (VC) poses a threat to humans and the environment due to its toxicity and carcinogenicity. In this study, an advanced reduction process (ARP) that combines sulfite with UV light was developed to destroy VC. The degradation of VC followed pseudo-first-order decay kinetics, and the effects of several experimental factors on the degradation rate constant were investigated. The largest rate constant was observed at pH 9, but complete dechlorination was obtained at pH 11. Higher sulfite dose and light intensity were found to increase the rate constant linearly. The rate constant dropped slightly when the initial VC concentration was below 1.5 mg/L and was approximately constant between 1.5 mg/L and 3.1 mg/L. A degradation mechanism was proposed to describe reactions between VC and the reactive species produced by the photolysis of sulfite. A kinetic model that described the major reactions in the system was developed and was able to explain the dependence of the rate constant on the experimental factors examined. This study may provide a new treatment technology for the removal of a variety of halogenated contaminants.
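
    Pseudo-first-order decay means ln C(t) falls linearly with time, so the rate constant is the negative slope of a straight-line fit; a sketch with hypothetical concentration data (values chosen to decay at about k = 0.045 per minute):

      import numpy as np
      from scipy import stats

      t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])    # time, min
      c = np.array([1.50, 1.19, 0.95, 0.60, 0.38, 0.19, 0.10])  # VC, mg/L

      res = stats.linregress(t, np.log(c))   # slope = -k for C = C0 * exp(-k t)
      print(f"k = {-res.slope:.4f} 1/min, r^2 = {res.rvalue ** 2:.4f}")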

  11. Investigation into cyclic utilization of carbon source in an advanced sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal (SIPER) wastewater treatment process.

    PubMed

    Yan, Peng; Ji, Fang-Ying; Wang, Jing; Chen, You-Peng; Shen, Yu; Fang, Fang; Guo, Jin-Song

    2015-01-01

    An advanced wastewater treatment process (SIPER) was developed to simultaneously reduce sludge production, prevent the accumulation of inorganic solids, recover phosphorus, and enhance nutrient removal. The ability to recover organic substance from excess sludge to enhance nutrient removal (especially nitrogen) and its performance as a C-source were evaluated in this study. The chemical oxygen demand/total nitrogen (COD/TN) and volatile fatty acids/total phosphorus (VFA/TP) ratios for the supernatant of the alkaline-treated sludge were 3.1 times and 2.7 times those of the influent, respectively. The biodegradability of the supernatant was much better than that of the influent. The system COD was increased by 91 mg/L, and nitrogen removal was improved by 19.6% (the removal rate for TN reached 80.4%) after the return of the alkaline-treated sludge as an internal C-source. The C-source recovered from the excess sludge was successfully used to enhance nitrogen removal. The internal C-source contributed 24.1% of the total C-source, and the cyclic utilization of the system C-source was achieved by recirculation of alkaline-treated sludge in the sludge reduction, inorganic solids separation, phosphorus recovery (SIPER) process.

  12. Enhanced nitrogen and phosphorus removal by an advanced simultaneous sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal wastewater treatment process.

    PubMed

    Yan, Peng; Guo, Jin-Song; Wang, Jing; Chen, You-Peng; Ji, Fang-Ying; Dong, Yang; Zhang, Hong; Ouyang, Wen-juan

    2015-05-01

    An advanced wastewater treatment process (SIPER) was developed to simultaneously decrease sludge production, prevent the accumulation of inorganic solids, recover phosphorus, and enhance nutrient removal. The feasibility of simultaneous enhanced nutrient removal along with sludge reduction as well as the potential for enhanced nutrient removal via this process were further evaluated. The results showed that the denitrification potential of the supernatant of alkaline-treated sludge was higher than that of the influent. The system COD and VFA were increased by 23.0% and 68.2%, respectively, after the return of alkaline-treated sludge as an internal C-source, and the internal C-source contributed 24.1% of the total C-source. A total of 74.5% of phosphorus from wastewater was recovered as a usable chemical crystalline product. The nitrogen and phosphorus removal were improved by 19.6% and 23.6%, respectively, after incorporation of the side-stream system. Sludge minimization and excellent nutrient removal were successfully coupled in the SIPER process.

  13. Mindfulness-Based Stress Reduction in Advanced Nursing Practice: A Nonpharmacologic Approach to Health Promotion, Chronic Disease Management, and Symptom Control.

    PubMed

    Williams, Hants; Simmons, Leigh Ann; Tanabe, Paula

    2015-09-01

    The aim of this article is to discuss how advanced practice nurses (APNs) can incorporate mindfulness-based stress reduction (MBSR) as a nonpharmacologic clinical tool in their practice. Over the last 30 years, patients and providers have increasingly used complementary and holistic therapies for the nonpharmacologic management of acute and chronic diseases. Mindfulness-based interventions, specifically MBSR, have been tested and applied within a variety of patient populations. There is strong evidence to support that the use of MBSR can improve a range of biological and psychological outcomes in a variety of medical illnesses, including acute and chronic pain, hypertension, and disease prevention. This article will review the many ways APNs can incorporate MBSR approaches for health promotion and disease/symptom management into their practice. We conclude with a discussion of how nurses can obtain training and certification in MBSR. Given the significant and growing literature supporting the use of MBSR in the prevention and treatment of chronic disease, increased attention on how APNs can incorporate MBSR into clinical practice is necessary.

  14. Derivation of the Data Reduction Equations for the Calibration of the Six-component Thrust Stand in the CE-22 Advanced Nozzle Test Facility

    NASA Technical Reports Server (NTRS)

    Wong, Kin C.

    2003-01-01

    This paper documents the derivation of the data reduction equations for the calibration of the six-component thrust stand located in the CE-22 Advanced Nozzle Test Facility. The purpose of the calibration is to determine the first-order interactions between the axial, lateral, and vertical load cells (second-order interactions are assumed to be negligible). In an ideal system, the measurements made by the thrust stand along the three coordinate axes should be independent. For example, when a test article applies an axial force on the thrust stand, the axial load cells should measure the full magnitude of the force, while the off-axis load cells (lateral and vertical) should read zero. Likewise, if a lateral force is applied, the lateral load cells should measure the entire force, while the axial and vertical load cells should read zero. However, in real-world systems, there may be interactions between the load cells. Through proper design of the thrust stand, these interactions can be minimized, but are hard to eliminate entirely. Therefore, the purpose of the thrust stand calibration is to account for these interactions, so that necessary corrections can be made during testing. These corrections can be expressed in the form of an interaction matrix, and this paper shows the derivation of the equations used to obtain the coefficients in this matrix.
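
    The paper derives the exact data reduction equations; only to fix ideas, a least-squares sketch of a first-order calibration is given below, with invented loads and a synthetic interaction matrix. Known loads F are applied, readings R are recorded, and the interaction matrix is recovered from R ≈ F Cᵀ.

      import numpy as np

      # Applied calibration loads (axial, lateral, vertical), in lbf (hypothetical)
      F = np.array([[100.,   0.,   0.],
                    [  0., 100.,   0.],
                    [  0.,   0., 100.],
                    [ 50.,  50.,   0.],
                    [  0.,  50.,  50.]])

      # Synthetic "true" interaction matrix with small cross-axis terms
      C_true = np.array([[1.00, 0.02, 0.01],
                         [0.03, 1.00, 0.02],
                         [0.01, 0.01, 1.00]])
      R = F @ C_true.T                                 # simulated load-cell readings

      C_hat = np.linalg.lstsq(F, R, rcond=None)[0].T   # solve R ~ F C^T for C
      F_corrected = R @ np.linalg.inv(C_hat).T         # correction applied during testing
      print(np.round(C_hat, 3))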

  15. Calculating bone-lead measurement variance.

    PubMed Central

    Todd, A C

    2000-01-01

    The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
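
    The covariance point can be illustrated with ordinary propagation of error for inverse prediction from a calibration line, x0 = (y0 - b)/m; this is a generic sketch with invented numbers, not the corrected formulas from the paper.

      import numpy as np

      x = np.array([0., 10., 20., 40., 60.])      # reference bone lead (hypothetical units)
      y = np.array([0.1, 5.2, 9.8, 20.3, 29.9])   # instrument response

      X = np.vstack([x, np.ones_like(x)]).T
      (m, b), res, *_ = np.linalg.lstsq(X, y, rcond=None)
      s2 = res[0] / (len(x) - 2)                  # residual variance
      cov = s2 * np.linalg.inv(X.T @ X)           # covariance of (m, b)

      y0, var_y0 = 12.0, 0.25                     # new measurement and its variance
      x0 = (y0 - b) / m
      # first-order propagation, including the slope-intercept covariance term
      var_x0 = (var_y0 + cov[1, 1] + x0**2 * cov[0, 0] + 2 * x0 * cov[0, 1]) / m**2
      print(f"x0 = {x0:.2f} +/- {var_x0 ** 0.5:.2f}")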

  16. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts...

  17. Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...

  1. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    PubMed

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and the modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as for MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff: PVAR is superior to MVAR in all cases, exhibits the best ability to discriminate between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
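
    As a rough numerical sketch of the idea, assuming the usual two-sample form PVAR(τ) = ½⟨(ȳ₂ − ȳ₁)²⟩ with the frequency of each adjacent window estimated by linear regression on phase (an illustration, not the paper's reference implementation):

      import numpy as np

      def pvar(phase, n, fs):
          """Half mean-square difference of LR frequency estimates on adjacent windows."""
          t = np.arange(n) / fs
          ybar = [np.polyfit(t, phase[i:i + n], 1)[0]   # LR slope = frequency estimate
                  for i in range(0, len(phase) - n + 1, n)]
          return 0.5 * np.mean(np.diff(ybar) ** 2)

      rng = np.random.default_rng(4)
      phase = np.cumsum(rng.normal(0.0, 1e-12, 100000))  # toy white-FM phase record, s
      print(pvar(phase, n=1000, fs=1.0))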

  2. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
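
    The recommended power model sets var(y) ≈ a·xᵇ; it can be fitted from replicate standards and then used to form 1/σ weights in a weighted calibration. A numpy sketch with synthetic heteroskedastic data (all names illustrative); note that np.polyfit expects weights of 1/σ, not 1/σ².

      import numpy as np

      rng = np.random.default_rng(5)
      conc = np.repeat(np.array([1., 2., 5., 10., 20.]), 8)   # replicate standards
      signal = 3.0 * conc + rng.normal(0, 0.1 * conc**0.8)    # heteroskedastic response

      # Fit the power model of variance, log var = log a + b log x, from replicates
      levels = np.unique(conc)
      v = np.array([signal[conc == c].var(ddof=1) for c in levels])
      b_pow, log_a = np.polyfit(np.log(levels), np.log(v), 1)

      sigma = np.sqrt(np.exp(log_a) * conc**b_pow)
      slope, intercept = np.polyfit(conc, signal, 1, w=1.0 / sigma)  # weighted fit
      print(f"variance exponent ~ {b_pow:.2f}; weighted fit: {slope:.3f}x + {intercept:.3f}")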

  3. Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera

    NASA Astrophysics Data System (ADS)

    Marchitto, T. M.; Grist, H. R.; van Geen, A.

    2013-12-01

    Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians; while a reduction in seasonality would cause the two species' distributions to move closer together.
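
    Comparing the spread of two single-shell Mg/Ca populations is, statistically, a two-sample variance comparison; a scipy sketch with synthetic values follows (Levene's test is preferred over the classical F ratio because it is less sensitive to non-normality).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      mgca_modern = rng.normal(3.2, 0.40, 60)   # synthetic single-shell Mg/Ca, 20th century
      mgca_9ka = rng.normal(3.2, 0.28, 60)      # synthetic single-shell Mg/Ca, ~9.3 ka

      w, p = stats.levene(mgca_modern, mgca_9ka)
      ratio = mgca_modern.var(ddof=1) / mgca_9ka.var(ddof=1)
      print(f"variance ratio {ratio:.2f}, Levene p = {p:.3f}")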

  4. Cyclostationary analysis with logarithmic variance stabilisation

    NASA Astrophysics Data System (ADS)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
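
    The square envelope spectrum itself is easy to reproduce; a scipy sketch with a toy amplitude-modulated signal follows (carrier and modulation frequencies are arbitrary choices; the paper's contribution is the variance-stabilised statistic built on top of this quantity).

      import numpy as np
      from scipy.signal import hilbert

      fs = 20000.0
      t = np.arange(0, 1.0, 1 / fs)
      x = (1 + 0.5 * np.cos(2 * np.pi * 57.0 * t)) * np.sin(2 * np.pi * 3000.0 * t)
      x += np.random.default_rng(6).normal(0, 0.3, t.size)   # background noise

      env2 = np.abs(hilbert(x)) ** 2            # square envelope
      env2 -= env2.mean()                       # drop the DC component
      ses = np.abs(np.fft.rfft(env2)) / t.size  # square envelope spectrum
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      print("dominant modulation (Hz):", freqs[1:500][np.argmax(ses[1:500])])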

  5. Automatic variance analysis of multistage care pathways.

    PubMed

    Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T

    2014-01-01

    A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280

  6. Correcting an analysis of variance for clustering.

    PubMed

    Hedges, Larry V; Rhoads, Christopher H

    2011-02-01

    A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
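
    The exact multiplicative factors are derived in the paper; the familiar special case they generalize is the design-effect deflation of a naive F with cluster-level degrees of freedom, sketched below under a balanced design and a single common intraclass correlation (an approximation, not the paper's formula).

      from scipy import stats

      def corrected_f(f_naive, n_per_cluster, icc, n_clusters, n_groups):
          """Deflate an F computed ignoring clustering (balanced-design sketch)."""
          deff = 1 + (n_per_cluster - 1) * icc      # design effect
          f_adj = f_naive / deff
          df1, df2 = n_groups - 1, n_clusters - n_groups
          return f_adj, stats.f.sf(f_adj, df1, df2)

      f_adj, p = corrected_f(f_naive=6.2, n_per_cluster=25, icc=0.10,
                             n_clusters=20, n_groups=2)
      print(f"adjusted F = {f_adj:.2f}, p = {p:.3f}")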

  7. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
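
    One widely used design-based estimator of the encounter rate variance (of the general form examined in the paper) weights squared deviations of per-line encounter rates by line length; a sketch with invented transect data:

      import numpy as np

      def encounter_rate_var(n_k, l_k):
          """Variance of n/L over K lines, treating lines as a random sample."""
          n_k = np.asarray(n_k, dtype=float)
          l_k = np.asarray(l_k, dtype=float)
          K, L, n = len(n_k), l_k.sum(), n_k.sum()
          return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - n / L) ** 2)

      counts = [4, 0, 7, 2, 5, 1]                # detections per transect (invented)
      lengths = [1.2, 0.8, 1.5, 1.0, 1.3, 0.9]   # line lengths, km
      print("var(n/L) ~", encounter_rate_var(counts, lengths))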

  9. Variance analysis. Part II, The use of computers.

    PubMed

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

  10. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
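
    A single-period toy version of the cloning idea can be written in a few lines: simulate N independent clones of the market, replace the mean and variance by their empirical counterparts, and maximize the weighted sum over the portfolio weight. The return distribution and the weight grid below are assumptions for illustration; the paper's multiperiod dynamic programming is not reproduced.

        # Toy single-period illustration of the market-cloning idea:
        # replace mean and variance by empirical counterparts over N
        # clones, then maximize (mean - gamma * variance) over the
        # weight on the risky asset. Distributions are assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000                        # independent market clones
        risky = rng.normal(0.06, 0.20, N)  # assumed risky return clones
        riskless = 0.02
        gamma = 2.0                        # variance-aversion weight

        best_w, best_val = None, -np.inf
        for w in np.linspace(0.0, 1.0, 201):
            port = w * risky + (1 - w) * riskless
            val = port.mean() - gamma * port.var()
            if val > best_val:
                best_w, best_val = w, val

        # Closed-form check: w* = (mu - r) / (2 * gamma * sigma^2) = 0.25
        print(best_w, (0.06 - 0.02) / (2 * gamma * 0.20**2))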

  11. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  12. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...

  14. Estimation of Variance Components of Quantitative Traits in Inbred Populations

    PubMed Central

    Abney, Mark; McPeek, Mary Sara; Ober, Carole

    2000-01-01

    Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322

  16. The phenotypic variance gradient – a novel concept

    PubMed Central

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-01-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely “a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added”. This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a “phenotypic variance gradient”, are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization. PMID:25540685
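
    The suggested plot is easy to produce; the sketch below uses hypothetical trait data and adds a slope-2 reference line (constant coefficient of variation), one natural choice since the abstract does not fix the reference slope.

        # Reaction-norm plot as suggested: log(variance) against
        # log(mean) across an environmental gradient, with a reference
        # line. Data and the slope-2 reference are assumptions here.
        import numpy as np
        import matplotlib.pyplot as plt

        # hypothetical trait measurements in four environments
        data = [np.random.default_rng(i).normal(10 + 2 * i, 1 + 0.5 * i, 50)
                for i in range(4)]

        log_mean = np.log([d.mean() for d in data])
        log_var = np.log([d.var(ddof=1) for d in data])

        plt.plot(log_mean, log_var, "o-", label="observed")
        x = np.linspace(log_mean.min(), log_mean.max(), 2)
        plt.plot(x, 2 * x + (log_var[0] - 2 * log_mean[0]), "--",
                 label="slope-2 reference (constant CV)")
        plt.xlabel("log(mean)"); plt.ylabel("log(variance)")
        plt.legend(); plt.show()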

  17. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Public design report (preliminary and final)

    SciTech Connect

    1996-07-01

    This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NO{sub x} emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NO{sub x} burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NO{sub x} burners, advanced overfire systems, and digital control system.

  18. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    NASA Astrophysics Data System (ADS)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods have been developed to incorporate hyperspectral imaging specifics into generic data mining techniques, traditionally used for hyperspectral data processing. Projection pursuit methods embody the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of variance in data caused by the imaging system must be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. An Expectation-Maximization classification algorithm modification for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of calibration data coarsening by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  20. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, Second quarter 1992

    SciTech Connect

    Not Available

    1992-08-24

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  1. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine-out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders; otherwise, cylinder-to-cylinder variation in the NOx contribution produces unacceptable variability in overall engine emissions. This presentation demonstrates the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed, reflecting the variance in the EGR distribution quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  2. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10{sup 4} to 10{sup 6} times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking of the impact of input uncertainties, from S = 0 (no impact) to S = 1 (the only influential part). The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
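
    The sampling scheme described, drawing each input from its assigned distribution and executing the model many times, is essentially Monte Carlo estimation of Sobol-type sensitivity indices. Below is a minimal sketch with the pick-freeze estimator of the first-order index S_i; the three-parameter model function is a stand-in, not the RBE/EQD2 models of the abstract.

        # Variance-based sensitivity analysis in the spirit described:
        # sample inputs from their distributions, run the model many
        # times, and estimate first-order indices S_i by "pick-freeze".
        # The model function is a toy stand-in.
        import numpy as np

        def model(x):
            return x[:, 0] + 2 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(0)
        n, d = 100_000, 3
        A = rng.normal(1.0, 0.3, (n, d))  # input uncertainty distributions
        B = rng.normal(1.0, 0.3, (n, d))

        yA, yB = model(A), model(B)
        var_y = yA.var()
        for i in range(d):
            C = B.copy()
            C[:, i] = A[:, i]             # freeze parameter i from A
            yC = model(C)
            S_i = (np.mean(yA * yC) - yA.mean() * yB.mean()) / var_y
            print(f"S_{i} = {S_i:.3f}")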

  3. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    PubMed

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
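
    The decomposition step that ASCA and the proposed ANOVA-TP share can be sketched compactly: deflate the data matrix into per-factor effect matrices plus residuals, then inspect one effect matrix with an SVD (as ASCA does via PCA). The PLS/target-projection step itself is omitted below, and the two-factor design is hypothetical.

        # Sketch of the ANOVA decomposition shared by ASCA and ANOVA-TP:
        # split the centered data matrix into per-factor effect matrices
        # plus residuals, then inspect one effect with an SVD.
        import numpy as np

        def effect_matrix(X, labels):
            """Rows get their group mean (after grand-mean centering)."""
            Xc = X - X.mean(axis=0)
            E = np.zeros_like(Xc)
            for lev in np.unique(labels):
                idx = labels == lev
                E[idx] = Xc[idx].mean(axis=0)
            return E

        rng = np.random.default_rng(0)
        X = rng.normal(size=(24, 200))       # 24 chromatograms, 200 channels
        pasteurized = np.repeat([0, 1], 12)  # factor 1: treatment
        grade = np.tile([0, 1, 2], 8)        # factor 2: quality grade

        E_trt = effect_matrix(X, pasteurized)
        E_grd = effect_matrix(X, grade)
        residuals = X - X.mean(axis=0) - E_trt - E_grd

        U, s, Vt = np.linalg.svd(E_trt + residuals, full_matrices=False)
        print("scores on first treatment-effect component:",
              (U[:, 0] * s[0]).round(2))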

  4. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...

  5. Characterizing the evolution of genetic variance using genetic covariance tensors.

    PubMed

    Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W

    2009-06-12

    Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits, using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.

  6. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  7. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  8. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  9. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
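
    The tradeoff itself is easy to demonstrate outside the concept-learning setting: across many resampled training sets, a rigid model shows high bias and low variance, a flexible one the reverse. The simulation below is a generic statistical-learning illustration, not the exemplar/prototype models discussed in the paper.

        # Bias/variance tradeoff: fit rigid (degree 1) and flexible
        # (degree 9) polynomials to many resampled training sets and
        # compare squared bias and variance of the predictions.
        import numpy as np

        rng = np.random.default_rng(0)
        true_f = lambda x: np.sin(2 * np.pi * x)
        x_test = np.linspace(0, 1, 50)

        for degree in (1, 9):
            preds = []
            for _ in range(500):              # many training resamples
                x = rng.uniform(0, 1, 20)
                y = true_f(x) + rng.normal(0, 0.3, 20)
                preds.append(np.polyval(np.polyfit(x, y, degree), x_test))
            preds = np.array(preds)
            bias2 = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
            var = preds.var(axis=0).mean()
            print(f"degree {degree}: bias^2 = {bias2:.3f}, variance = {var:.3f}")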

  10. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...

  11. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the Williams... the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from a standard... the Williams-Steiger Occupational Safety and Health Act of 1970. In accordance with the...

  12. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  13. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  14. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  15. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  16. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  17. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Because the usual derivations involve integration by parts, many students find them difficult. In this note, a technique is demonstrated for deriving mean and variance through differential…
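
    The abstract does not spell out the technique, so as one plausible differentiation-based route, the sketch below derives the mean and variance of the exponential distribution by differentiating its moment generating function at t = 0, with no integration by parts.

        # One differentiation-based route to mean and variance (shown as
        # an assumption; the note's exact technique is not given here):
        # differentiate the moment generating function at t = 0.
        import sympy as sp

        t, lam = sp.symbols("t lambda", positive=True)
        M = lam / (lam - t)              # MGF of the exponential distribution

        mean = sp.diff(M, t).subs(t, 0)  # E[X]   = M'(0)
        m2 = sp.diff(M, t, 2).subs(t, 0) # E[X^2] = M''(0)
        var = sp.simplify(m2 - mean**2)

        print(mean, var)                 # 1/lambda, 1/lambda**2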

  18. Productive Failure in Learning the Concept of Variance

    ERIC Educational Resources Information Center

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  19. A Computer Program to Determine Reliability Using Analysis of Variance

    ERIC Educational Resources Information Center

    Burns, Edward

    1976-01-01

    A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
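
    A present-day equivalent of such a program fits in a few lines. The sketch below uses Hoyt's ANOVA formulation of reliability for an m-subjects-by-n-items score matrix (a standard choice; the original Fortran program's exact formulas and output are not reproduced).

        # Reliability from analysis of variance (Hoyt's method): a
        # two-way subjects-by-items decomposition without replication.
        import numpy as np

        def hoyt_reliability(X):
            m, n = X.shape
            grand = X.mean()
            ss_subj = n * ((X.mean(axis=1) - grand) ** 2).sum()
            ss_item = m * ((X.mean(axis=0) - grand) ** 2).sum()
            ss_res = ((X - grand) ** 2).sum() - ss_subj - ss_item
            ms_subj = ss_subj / (m - 1)
            ms_res = ss_res / ((m - 1) * (n - 1))
            return (ms_subj - ms_res) / ms_subj  # reliability coefficient

        rng = np.random.default_rng(0)
        ability = rng.normal(0, 1, (30, 1))          # 30 subjects
        X = ability + rng.normal(0, 0.8, (30, 10))   # 10 items with noise
        print(round(hoyt_reliability(X), 3))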

  20. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  1. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  2. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  3. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types α ≤ −3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically α = −3 and α = −4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for α > −5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
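
    The two variances compared here have simple sample forms. The sketch below implements the textbook two-sample (Allan) and three-sample (Hadamard) variances for fractional-frequency data and shows the drift immunity claimed above; the noise and drift levels are made up for illustration, and the MCS Kalman filter relation is not reproduced.

        # Non-overlapped Allan and Hadamard variances from
        # fractional-frequency samples y_i averaged over tau0.
        import numpy as np

        def allan_var(y):
            d = np.diff(y)               # y_{i+1} - y_i
            return 0.5 * np.mean(d ** 2)

        def hadamard_var(y):
            d2 = np.diff(y, n=2)         # y_{i+2} - 2*y_{i+1} + y_i
            return np.mean(d2 ** 2) / 6.0

        rng = np.random.default_rng(0)
        t = np.arange(10_000)
        drift = 5e-11 * t                # linear frequency drift
        y = rng.normal(0, 1e-11, t.size) + drift

        # The drift inflates the Allan variance by about an order of
        # magnitude here, while the second difference in the Hadamard
        # variance cancels it, which is why it suits rubidium clocks.
        print(allan_var(y), hadamard_var(y))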

  4. A Variance Explanation Paradox: When a Little Is a Lot.

    ERIC Educational Resources Information Center

    Abelson, Robert P.

    1985-01-01

    Argues that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. An example is the computation of percentage of variance in batting performance among major league baseball players. (Author/CB)

  5. On the Endogeneity of the Mean-Variance Efficient Frontier.

    ERIC Educational Resources Information Center

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  6. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. First quarterly technical progress report, January--March 1995

    SciTech Connect

    1995-12-31

    The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. The project provides a stepwise evaluation of the following NO{sub x} reduction technologies: Advanced overfire air (AOFA), Low NO{sub x} burners (LNB), LNB with AOFA, and Advanced Digital Controls and Optimization Strategies. The project has completed the baseline, AOFA, LNB, and LNB+AOFA test segments, fulfilling all testing originally proposed to DOE. Analysis of the LNB long-term data collected shows the full load NO{sub x} emission levels to be near 0.65 lb/MBtu. This NO{sub x} level represents a 48 percent reduction when compared to the baseline, full load value of 1.24 lb/MBtu. These reductions were sustainable over the long-term test period and were consistent over the entire load range. Full load, fly ash LOI values in the LNB configuration were near 8 percent compared to 5 percent for baseline. Results from the LNB+AOFA phase indicate that full load NO{sub x} emissions are approximately 0.40 lb/MBtu with a corresponding fly ash LOI value of near 8 percent. Although this NO{sub x} level represents a 67 percent reduction from baseline levels, a substantial portion of the incremental change in NO{sub x} emissions between the LNB and LNB+AOFA configurations was the result of operational changes and not the result of the AOFA system. Phase 4 of the project is in progress. During first quarter 1995, design of the advanced control and optimization software and strategies continued. Process data collected from the DCS is being archived to a server on the plant information network and subsequently transferred to SCS offices in Birmingham for analysis and use in training the neural network combustion models.

  7. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  10. Variance After-Effects Distort Risk Perception in Humans.

    PubMed

    Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel

    2016-06-01

    In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500

  11. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...

  12. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...

  13. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  14. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  15. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  16. Wavelet variance analysis for random fields on a regular lattice.

    PubMed

    Mondal, Debashis; Percival, Donald B

    2012-02-01

    There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
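
    The scale-by-scale decomposition can be illustrated in one dimension with an orthonormal DWT: with periodized boundary handling, the detail energies plus the final approximation energy sum to the total sum of squares, so each level's share estimates its variance contribution. This is a simplification of the MODWT-based estimators the wavelet-variance literature usually uses, and the series below is synthetic (pywt.wavedec2 extends the same idea to 2-D lattices).

        # Scale-by-scale variance decomposition with an orthonormal DWT.
        # With mode="periodization" the coefficient energies sum to the
        # total sum of squares, so each level's energy over the total
        # gives its variance share.
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.normal(size=1024))  # a random-walk-like series
        x = x - x.mean()

        coeffs = pywt.wavedec(x, "db4", mode="periodization", level=5)
        approx, details = coeffs[0], coeffs[1:]

        total = np.sum(x ** 2)
        for j, d in enumerate(reversed(details), start=1):  # level 1 = finest
            print(f"scale 2^{j}: variance share {(d ** 2).sum() / total:.3f}")
        print(f"approximation: {(approx ** 2).sum() / total:.3f}")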

  17. High-fidelity Simulation of Jet Noise from Rectangular Nozzles . [Large Eddy Simulation (LES) Model for Noise Reduction in Advanced Jet Engines and Automobiles

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj

    2014-01-01

    This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.

  18. Space Launch System (SLS) Program Overview NASA Research Announcement (NRA) Advanced Booster (AB) Engineering Demonstration and Risk Reduction (EDRR) Industry Day

    NASA Technical Reports Server (NTRS)

    May, Todd A.

    2011-01-01

    SLS is a national capability that empowers entirely new exploration for missions of national importance. Program key tenets are safety, affordability, and sustainability. SLS builds on a solid foundation of experience and current capacities to enable a timely initial capability and evolve to a flexible heavy-lift capability through competitive opportunities: (1) Reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; (2) Enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability and performance. The road ahead promises to be an exciting journey for present and future generations, and we look forward to working with you to continue America's space exploration.

  19. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz > 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  20. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    PubMed

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  1. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  2. Variance Function Partially Linear Single-Index Models

    PubMed Central

    LIAN, HENG; LIANG, HUA; CARROLL, RAYMOND J.

    2014-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function. PMID:25642139

  3. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in... interest, and (b) Information is promptly made a matter of public record delineating the nature of...

  4. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  5. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  6. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  7. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  8. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  9. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.

  10. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimator for the treatment-effect estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
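
    The authors provide their estimator as an R function; purely as an illustration of the two-stage idea (fit the PS, adjust the outcome model by it, then bootstrap both stages together), a minimal Python sketch follows. The use of scikit-learn's LogisticRegression and all names here are our assumptions, not the paper's code.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def treatment_effect_ps(X, t, y):
            # Stage 1: estimated propensity score e(X) = P(t = 1 | X).
            ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
            # Stage 2: regress outcome on (1, t, PS); the coefficient of t
            # is the covariate-adjusted treatment-effect estimate.
            design = np.column_stack([np.ones_like(ps), t, ps])
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            return beta[1]

        def bootstrap_variance(X, t, y, n_boot=500, seed=0):
            # Empirical bootstrap: refit BOTH stages on every resample so
            # the sampling variability of the fitted PS is propagated.
            rng = np.random.default_rng(seed)
            n = len(y)
            est = np.empty(n_boot)
            for b in range(n_boot):
                i = rng.integers(0, n, n)
                est[b] = treatment_effect_ps(X[i], t[i], y[i])
            return est.var(ddof=1)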

  11. Variance and covariance estimates for weaning weight of Senepol cattle.

    PubMed

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

    Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive (h²A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive (h²M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal (rAM) effects were -.57 and -.52 with and without A⁻¹, respectively. PMID:1778806
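
    As a quick arithmetic check, the reported with-A⁻¹ components reproduce the published h²A, h²M and rAM if the phenotypic variance is taken as the sum of all five components (that summation is our assumption):

        # Variance components (kg^2) for weaning weight, with A-inverse.
        s2_A, s2_M, s_AM, s2_PE, s2_e = 139.05, 307.04, -117.57, -258.35, 588.18

        s2_P = s2_A + s2_M + s_AM + s2_PE + s2_e      # phenotypic variance
        h2_A = s2_A / s2_P                            # 0.211, as reported
        h2_M = s2_M / s2_P                            # 0.47, as reported
        r_AM = s_AM / (s2_A * s2_M) ** 0.5            # -0.57, as reported
        print(round(h2_A, 3), round(h2_M, 2), round(r_AM, 2))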

  12. Pilot-scale test of an advanced, integrated wastewater treatment process with sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal (SIPER).

    PubMed

    Yan, Peng; Ji, Fangying; Wang, Jing; Fan, Jianping; Guan, Wei; Chen, Qingkong

    2013-08-01

    Sludge reduction technologies are increasingly important in wastewater treatment, but have some defects. In order to remedy them, a novel, integrated process including sludge reduction, inorganic solids separation, phosphorus recovery, and enhanced nutrient removal was developed. The pilot-scale system was operated steadily at a treatment scale of 10 m³/d for 90 days. The results showed excellent nutrient removal, with average removal efficiencies for NH₄⁺-N, TN, TP, and COD reaching 98.2 ± 1.34%, 75.5 ± 3.46%, 95.3 ± 1.65%, and 92.7 ± 2.49%, respectively. The ratio of mixed liquor volatile suspended solids (MLVSS) to mixed liquor suspended solids (MLSS) in the system gradually increased, from 0.33 to 0.52. The process effectively prevented the accumulation of inert or inorganic solids in activated sludge. Phosphorus was recovered as a crystalline product with aluminum ion from wastewater. The observed sludge yield Yobs of the system was 0.103 gVSS/g COD, demonstrating that the system's sludge reduction potential is excellent.

  13. Effect of advanced aircraft noise reduction technology on the 1990 projected noise environment around Patrick Henry Airport. [development of noise exposure forecast contours for projected traffic volume and aircraft types]

    NASA Technical Reports Server (NTRS)

    Cawthorn, J. M.; Brown, C. G.

    1974-01-01

    A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies which are currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits which would add sound absorbent material in the nacelles of the engines or which would replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented of the effects on noise exposure area of the total number of daily operations.

  14. Innovative clean coal technology: 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Final report, Phases 1 - 3B

    SciTech Connect

    1998-01-01

    This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air (AOFA) system and Controlled Flow/Split Flame (CF/SF) low NOx burner (LNB). The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire air systems.

  15. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report: First quarter 1993

    SciTech Connect

    Not Available

    1993-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. During this quarter, long-term testing of the LNB + AOFA configuration continued and no parametric testing was performed. Further full-load optimization of the LNB + AOFA system began on March 30, 1993. Following completion of this optimization, comprehensive testing in this configuration will be performed including diagnostic, performance, verification, long-term, and chemical emissions testing. These tests are scheduled to start in May 1993 and continue through August 1993. Preliminary engineering and procurement are progressing on the Advanced Low NOx Digital Controls scope addition to the wall-fired project. The primary activities during this quarter include (1) refinement of the input/output lists, (2) procurement of the distributed digital control system, (3) configuration training, and (4) revision of schedule to accommodate project approval cycle and change in unit outage dates.

  16. Combining multivariate statistics and analysis of variance to redesign a water quality monitoring network.

    PubMed

    Guigues, Nathalie; Desenfant, Michèle; Hance, Emmanuel

    2013-09-01

    The objective of this paper was to demonstrate how multivariate statistics combined with the analysis of variance could support decision-making during the process of redesigning a water quality monitoring network with highly heterogeneous datasets in terms of time and space. Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were selected to optimise the selection of water quality parameters to be monitored as well as the number and location of monitoring stations. Sampling frequency was specifically investigated through the analysis of variance. The data used were obtained between 2007 and 2010 at the Long-term Environmental Research Monitoring and Testing System (OPE) located in the north-eastern part of France in relation to a geological radioactive-waste disposal project. PCA results showed that no substantial reduction among the parameters was possible, as strong correlations exist only between electrical conductivity, calcium, and bicarbonates. HCA results were geospatially represented for each field campaign and compared to one another in terms of similarities and differences, allowing us to group the monitoring stations into 12 categories. This approach enabled us to take into account not only the spatial variability of water quality but also its temporal variability. Finally, the analysis of variance showed that three very different behaviours occurred: parameters with high temporal variability and low spatial variability (e.g. suspended matter), parameters with high spatial variability and average temporal variability (e.g. calcium) and finally parameters with both high temporal and spatial variability (e.g. nitrate).
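
    As an illustration of how the three tools fit together, a minimal Python sketch (scikit-learn and SciPy) is given below; the placeholder data, the 12-cluster cut mirroring the station categories above, and the campaign labels are all assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.stats import f_oneway

        rng = np.random.default_rng(1)
        data = rng.normal(size=(120, 10))   # rows: station x campaign samples

        # 1. PCA: can the set of monitored parameters be reduced?
        print(PCA().fit(data).explained_variance_ratio_.cumsum())

        # 2. HCA: group monitoring stations by similarity (Ward linkage).
        groups = fcluster(linkage(data, method="ward"), t=12, criterion="maxclust")

        # 3. ANOVA: is a parameter's between-campaign (temporal) variability
        #    large relative to its within-campaign variability?
        campaign = np.repeat(np.arange(4), 30)
        print(f_oneway(*(data[campaign == c, 0] for c in range(4))))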

  17. Increased circulating VCAM-1 correlates with advanced disease and poor survival in patients with multiple myeloma: reduction by post-bortezomib and lenalidomide treatment

    PubMed Central

    Terpos, E; Migkou, M; Christoulas, D; Gavriatopoulou, M; Eleutherakis-Papaiakovou, E; Kanellias, N; Iakovaki, M; Panagiotidis, I; Ziogas, D C; Fotiou, D; Kastritis, E; Dimopoulos, M A

    2016-01-01

    Circulating vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and selectins were prospectively measured in 145 newly-diagnosed patients with symptomatic myeloma (NDMM), 61 patients with asymptomatic/smoldering myeloma (SMM), 47 with monoclonal gammopathy of undetermined significance (MGUS) and 87 multiple myeloma (MM) patients at first relapse who received lenalidomide- or bortezomib-based treatment (RD, n=47; or VD, n=40). Patients with NDMM had increased VCAM-1 and ICAM-1 compared with MGUS and SMM patients. Elevated VCAM-1 correlated with ISS-3 and was independently associated with inferior overall survival (OS) (45 months for patients with VCAM-1 >median vs 75 months, P=0.001). MM patients at first relapse had increased levels of ICAM-1 and L-selectin, even compared with NDMM patients, and had increased levels of VCAM-1 compared with MGUS and SMM. Both VD and RD dramatically reduced serum VCAM-1 after four cycles of therapy, but only VD reduced serum ICAM-1, irrespective of response to therapy. The reduction of VCAM-1 was more pronounced after RD than after VD. Our study provides evidence for the prognostic value of VCAM-1 in myeloma patients, suggesting that VCAM-1 could be a suitable target for the development of anti-myeloma therapies. Furthermore, the reduction of VCAM-1 and ICAM-1 by RD and VD supports the inhibitory effect of these drugs on the adhesion of MM cells to stromal cells. PMID:27232930

  19. Detecting pulsars with interstellar scintillation in variance images

    NASA Astrophysics Data System (ADS)

    Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.

    2016-11-01

    Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximize the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
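
    The core operation is simple to sketch in numpy: keep the image data as a (time, frequency, x, y) cube at high resolution and compute the per-pixel variance over the first two axes. The toy cube and injected modulated source below are ours, not survey data.

        import numpy as np

        def variance_image(cube):
            # Per-pixel variance over time and frequency; a scintillating
            # pulsar shows excess variance relative to steady sources/noise.
            return cube.var(axis=(0, 1))

        rng = np.random.default_rng(0)
        cube = rng.normal(size=(64, 32, 50, 50))             # (t, nu, x, y)
        cube[:, :, 25, 25] += rng.normal(scale=3.0, size=(64, 32))
        vimg = variance_image(cube)
        print(vimg[25, 25], vimg.mean())   # the modulated pixel stands out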

  20. Increased spatial variance accompanies reorganization of two continental shelf ecosystems.

    PubMed

    Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J

    2008-09-01

    Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
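
    A minimal sketch of tracking such a variance-based early-warning indicator, assuming a (years x stations) array of, e.g., cod-to-prey abundance ratios; the smoothing window is our choice.

        import numpy as np

        def smoothed_spatial_variance(indicator, window=3):
            # Variance across stations for each year, then a running mean;
            # a sustained rise may precede a transition in the mean state.
            spatial_var = indicator.var(axis=1)
            return np.convolve(spatial_var, np.ones(window) / window, mode="valid")

        rng = np.random.default_rng(0)
        ratios = rng.normal(1.0, 0.1, size=(30, 40))   # toy survey data
        print(smoothed_spatial_variance(ratios))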

  1. Maximization of total genetic variance in breed conservation programmes.

    PubMed

    Cervantes, I; Meuwissen, T H E

    2011-12-01

    The preservation of the maximum genetic diversity in a population is one of the main objectives within a breed conservation programme. We applied the maximum variance total (MVT) method to a unique population in order to maximize the total genetic variance. The maximization was performed with a simulated annealing algorithm. We selected the parents and the mating scheme at the same time by simply maximizing the total genetic variance (a mate selection problem). The scenario was compared with a scenario of full-sib lines, a MVT scenario with a restriction on the rate of inbreeding, and with a minimum coancestry selection scenario. The MVT method produces sublines in a population, attaining a scheme similar to full-sib sublining; this agrees with other authors' finding that the maximum genetic diversity in a population (the lowest overall coancestry) is attained in the long term by subdividing it into as many isolated groups as possible. The application of a restriction on the rate of inbreeding jointly with the MVT method avoids the consequences of inbreeding depression and maintains the effective size at an acceptable minimum. The scenario of minimum coancestry selection gave higher effective size values, but a lower total genetic variance. A maximization of the total genetic variance ensures more genetic variation for extreme traits, which could be useful in case the population needs to adapt to a new environment/production system.
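
    The paper couples parent selection and mating into a single mate-selection problem solved by annealing. A generic skeleton of that optimization is sketched below; the toy objective (one minus the average coancestry of the selected set, a common proxy for retained genetic variance) stands in for the authors' exact total-variance function.

        import numpy as np

        def anneal(score, propose, x0, n_iter=20000, t0=1.0, cool=0.9995, seed=0):
            # Generic simulated-annealing maximizer.
            rng = np.random.default_rng(seed)
            x, s = x0, score(x0)
            best = (x, s)
            t = t0
            for _ in range(n_iter):
                y = propose(x, rng)
                sy = score(y)
                if sy > s or rng.random() < np.exp((sy - s) / t):
                    x, s = y, sy
                    if s > best[1]:
                        best = (x, s)
                t *= cool
            return best

        rng0 = np.random.default_rng(1)
        A = rng0.normal(size=(50, 50))
        K = 0.1 * (A @ A.T) / 50 + 0.5 * np.eye(50)    # toy "kinship" matrix

        def score(sel):                                # higher = more diversity
            i = np.flatnonzero(sel)
            return 1.0 - K[np.ix_(i, i)].mean()

        def propose(sel, rng):                         # swap one selected parent
            new = sel.copy()
            new[rng.choice(np.flatnonzero(new))] = False
            new[rng.choice(np.flatnonzero(~new))] = True
            return new

        x0 = np.zeros(50, dtype=bool); x0[:10] = True  # pick 10 of 50 candidates
        print(anneal(score, propose, x0)[1])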

  2. Models of Postural Control: Shared Variance in Joint and COM Motions.

    PubMed

    Kilby, Melissa C; Molenaar, Peter C M; Newell, Karl M

    2015-01-01

    This paper investigated the organization of the postural control system in human upright stance. To this aim, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), ankle-hip model (4DOF), ankle-knee-hip model (5DOF), and ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on foam and rigid surfaces of support. Based on the CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during the more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface-of-support conditions.
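
    A minimal scikit-learn sketch of the shared-variance computation: X holds joint-angle time series (e.g. the 7DOF set) and Y the 3D COM trajectory; the synthetic data below are illustrative only.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 7))                      # 7 joint angles
        Y = X[:, :3] @ rng.normal(size=(3, 3)) \
            + 0.1 * rng.normal(size=(2000, 3))              # COM driven by joints

        cca = CCA(n_components=3).fit(X, Y)
        U, V = cca.transform(X, Y)
        # Canonical correlations quantify the joint/COM shared variance.
        print([np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(3)])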

  3. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  4. The positioning algorithm based on feature variance of billet character

    NASA Astrophysics Data System (ADS)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line. Then we can accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.

  5. Saturation of number variance in embedded random-matrix ensembles.

    PubMed

    Prakash, Ravi; Pandey, Akhilesh

    2016-05-01

    We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensembles of two noninteracting particles, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths. PMID:27300898
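
    The number-variance statistic is easy to sketch for an unfolded spectrum (unit mean spacing); the snippet below verifies the Poisson behaviour Σ²(L) ≈ L noted above, with the sampling details being our choices.

        import numpy as np

        def number_variance(levels, L, n_win=2000, seed=0):
            # Variance of the count of levels falling in windows of length L.
            rng = np.random.default_rng(seed)
            lo = rng.uniform(levels[0], levels[-1] - L, n_win)
            counts = np.searchsorted(levels, lo + L) - np.searchsorted(levels, lo)
            return counts.var()

        # Poisson (uncorrelated) spectrum: unit-mean exponential spacings.
        levels = np.cumsum(np.random.default_rng(1).exponential(1.0, 100_000))
        print([number_variance(levels, L) for L in (1, 5, 10)])  # ~ 1, 5, 10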

  6. Enhancing area of review capabilities: Implementing a variance program

    SciTech Connect

    De Leon, F.

    1995-12-01

    The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.

  7. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    PubMed

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance

  8. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    PubMed

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674
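
    The DAVAR is, in essence, the Allan variance recomputed in a sliding window; a compact numpy sketch follows, with the noise-step example mirroring the "sudden change in the clock noise variance" anomaly (window sizes and parameters are arbitrary choices of ours).

        import numpy as np

        def avar(y, m):
            # Overlapping Allan variance at averaging factor m (tau = m*tau0),
            # from fractional-frequency samples y.
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")
            d = ybar[m:] - ybar[:-m]
            return 0.5 * np.mean(d ** 2)

        def davar(y, m, window, step):
            # Allan variance evaluated on a window sliding along the record.
            return np.array([avar(y[s:s + window], m)
                             for s in range(0, len(y) - window + 1, step)])

        rng = np.random.default_rng(0)
        y = np.concatenate([rng.normal(0, 1, 5000),    # nominal clock noise
                            rng.normal(0, 2, 5000)])   # noise variance jumps
        print(davar(y, m=10, window=1000, step=500))   # rises mid-record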

  10. Entropy, Fisher Information and Variance with Frost-Musulin Potential

    NASA Astrophysics Data System (ADS)

    Idiodi, J. O. A.; Onate, C. A.

    2016-09-01

    This study presents the Shannon and Renyi information entropy for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is equally computed. This controls both the chemical and physical properties of some molecular systems. We observe the behaviour of the Shannon entropy, Renyi entropy, Fisher information and variance with the quantum number n, respectively.

  12. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes, specifically the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (both of the latter with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
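
    As we read it, the central trick is to drive paired tau-leap trajectories with antithetic Poisson draws, obtained by pushing u and 1-u through the Poisson inverse CDF (touching only the PRNG inputs, consistent with the "black-box" view). A minimal sketch for a toy decay process follows; the model and parameters are ours, not the paper's.

        import numpy as np
        from scipy.stats import poisson

        def tau_leap_pair(x0, rate, nu, tau, n_steps, rng):
            # Two negatively correlated tau-leaping paths sharing uniforms.
            xa = xb = float(x0)
            for _ in range(n_steps):
                u = rng.random()
                xa += nu * poisson.ppf(u, rate(xa) * tau)        # uses u
                xb += nu * poisson.ppf(1.0 - u, rate(xb) * tau)  # uses 1-u
            return xa, xb

        rng = np.random.default_rng(0)
        rate = lambda x: 0.5 * max(x, 0.0)       # linear decay intensity
        pairs = [tau_leap_pair(100, rate, -1, 0.1, 20, rng) for _ in range(1000)]
        print(np.mean([(a + b) / 2 for a, b in pairs]))  # reduced-variance mean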

  13. Propulsion system noise reduction

    NASA Technical Reports Server (NTRS)

    Feiler, C. E.; Heidelberg, L. J.; Karchmer, A. M.; Lansing, D. L.; Miller, B. A.; Rice, E. J.

    1975-01-01

    The progress in propulsion system noise reduction is reviewed. The noise technology areas discussed include: fan noise; advances in suppression including conventional acoustic treatment, high Mach number inlets, and wing shielding; engine core noise; flap noise from both under-the-wing and over-the-wing powered-lift systems; supersonic jet noise suppression; and the NASA program in noise prediction.

  14. Advanced byproduct recovery: Direct catalytic reduction of SO2 to elemental sulfur. First quarterly technical progress report, October-December 1995

    SciTech Connect

    Benedek, K.; Flytzani-Stephanopoulos, M.

    1996-02-01

    The team of Arthur D. Little, Tufts University and Engelhard Corporation will be conducting Phase I of a four and a half year, two-phase effort to develop and scale up an advanced byproduct recovery technology that is a direct, single-stage, catalytic process for converting sulfur dioxide to elemental sulfur. This catalytic process reduces SO2 over a fluorite-type oxide (such as ceria or zirconia). The catalytic activity can be significantly promoted by active transition metals, such as copper. More than 95% elemental sulfur yield, corresponding to almost complete sulfur dioxide conversion, was obtained over a Cu-Ce-O oxide catalyst as part of an ongoing DOE-sponsored University Coal Research Program. This type of mixed metal oxide catalyst has stable activity, high selectivity for sulfur production, and is resistant to water and carbon dioxide poisoning. Tests with CO and CH4 reducing gases indicate that the catalyst has the potential for flexibility with regard to the composition of the reducing gas, making it attractive for utility use. The performance of the catalyst is consistently good over a range of SO2 inlet concentrations (0.1 to 10%), indicating its flexibility in treating SO2 tail gases as well as high-concentration streams.

  15. VOCs elimination and health risk reduction in e-waste dismantling workshop using integrated techniques of electrostatic precipitation with advanced oxidation technologies.

    PubMed

    Chen, Jiangyao; Huang, Yong; Li, Guiying; An, Taicheng; Hu, Yunkun; Li, Yunlu

    2016-01-25

    Volatile organic compounds (VOCs) emitted during the electronic waste dismantling process (EWDP) were treated at a pilot scale, using integrated electrostatic precipitation (EP)-advanced oxidation technologies (AOTs, subsequent photocatalysis (PC) and ozonation). Although no obvious alteration was seen in VOC concentration and composition, EP technology removed 47.2% of total suspended particles, greatly reducing the negative effect of particles on the subsequent AOTs. After the AOT treatment, average removal efficiencies of 95.7%, 95.4%, 87.4%, and 97.5% were achieved for aromatic hydrocarbons, aliphatic hydrocarbons, halogenated hydrocarbons, and nitrogen- and oxygen-containing compounds, respectively, over a 60-day treatment period. Furthermore, high elimination capacities were also seen using the hybrid technique of PC with ozonation; this was due to the PC unit's high loading rates and excellent pre-treatment abilities, and the ozonation unit's high elimination capacity. In addition, the non-cancer and cancer risks, as well as the occupational exposure cancer risk, for workers exposed to emitted VOCs in the workshop were reduced dramatically after the integrated technique treatment. Results demonstrated that the integrated technique led to highly efficient and stable VOC removal from EWDP emissions at a pilot scale. This study points to an efficient approach for atmospheric purification and improving human health in e-waste recycling regions.

  16. Facile synthesis of N-rich carbon quantum dots by spontaneous polymerization and incision of solvents as efficient bioimaging probes and advanced electrocatalysts for oxygen reduction reaction.

    PubMed

    Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi

    2016-01-28

    In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process of cyclic and nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light-blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, a lotus-seedpod-like structure developed, with seed-like N-CDs decorated on the surface of carbon layers and a high proportion of quaternary nitrogen moieties, which exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, comparable to or even lower than that of commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR.

  17. Ultrasonic Waves and Strength Reduction Indexes for the Assessment of the Advancement of Deterioration Processes in Travertines from Pamukkale and Hierapolis (Turkey)

    NASA Astrophysics Data System (ADS)

    Bobrowska, Alicja; Domonik, Andrzej

    2015-09-01

    In construction, the technical diagnostics of stone as a raw material require predicting the long-term effects of environmental impact on its qualities and geomechanical properties. The paper presents geomechanical research that identifies the factors behind strength loss in stone and forecasts the long-term rate at which destructive phenomena develop in the stone structure. Turkish travertines from the Denizli-Kaklık Basin (Pamukkale and Hierapolis quarries), which have been commonly used for centuries in global architecture, were selected as the research material. The rock material was subjected to testing of the impact of various environmental factors, following European standards, within the research programme designed by the authors. Its resistance to the crystallization of salts from aqueous solutions and to the effects of SO2, frost, and high temperatures is presented. The studies allowed establishing the following quantitative indicators: the ultrasonic waves index (IVp) and the strength reduction index (IRc). Assessment of the deterioration effects indicates that the most active factors decreasing travertine resistance during aging are frost and sulphur dioxide (SO2). Their negative influence is particularly intense when the stone material is already strongly weathered.

  18. 40 CFR 124.64 - Appeals of variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...

  19. 40 CFR 124.64 - Appeals of variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...

  20. 40 CFR 124.64 - Appeals of variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...

  1. The Variance of Intraclass Correlations in Three- and Four-Level Models

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.

    2012-01-01

    Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
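
    For reference (these are the standard definitions, not quoted from the article itself): in a three-level model with variance components \sigma_1^2 (level 1, e.g. students), \sigma_2^2 (classrooms), and \sigma_3^2 (schools), the intraclass correlations are

        \rho_2 = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2 + \sigma_3^2},
        \qquad
        \rho_3 = \frac{\sigma_3^2}{\sigma_1^2 + \sigma_2^2 + \sigma_3^2},

    and the four-level case adds a component \sigma_4^2 to the denominator together with the corresponding ratio \rho_4.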

  2. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  3. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... section 301(i) based on delay in completion of a publicly owned treatment works; (2) After consultation... technology; or (3) Variances under CWA section 316(a) for thermal pollution. (b) The State Director may deny... 124.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...

  4. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... repair or rehabilitation of historic structures upon a determination that the proposed repair or rehabilitation will not preclude the structure's continued designation as a historic structure and the variance is the minimum necessary to preserve the historic character and design of the structure....

  5. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...

  6. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...

  7. Variance-based uncertainty relations for incompatible observables

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu

    2016-09-01

    We formulate uncertainty relations for an arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most cases than those derived from some existing inequalities. Detailed examples are presented.
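
    As one representative example of such sum-of-variances bounds (the two-observable Heisenberg-type relation of Maccone and Pati, quoted here from the literature rather than from this paper): for any state |\psi\rangle and any |\psi^\perp\rangle orthogonal to it,

        \Delta A^2 + \Delta B^2 \;\ge\; \pm i\,\langle [A,B] \rangle
            + \left| \langle \psi | (A \pm iB) | \psi^\perp \rangle \right|^2 ,

    which remains nontrivial even when |\psi\rangle is an eigenstate of A or of B.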

  8. Perspective projection for variance pose face recognition from camera calibration

    NASA Astrophysics Data System (ADS)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face-feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
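
    The projection step itself is the standard pinhole model; a small numpy sketch with toy intrinsics follows (all numbers below are assumptions, not values from the paper).

        import numpy as np

        def project(K, pts3d):
            # Perspective projection of camera-frame 3D points (n x 3) with
            # intrinsic matrix K; returns pixel coordinates (n x 2).
            x = (K @ pts3d.T).T
            return x[:, :2] / x[:, 2:3]

        K = np.array([[800.0,   0.0, 320.0],     # fx, skew, cx (toy values)
                      [  0.0, 800.0, 240.0],     # fy, cy
                      [  0.0,   0.0,   1.0]])
        eyes = np.array([[-0.03, 0.0, 0.5],      # toy 3D eye centres (metres)
                         [ 0.03, 0.0, 0.5]])
        print(project(K, eyes))   # inter-eye pixel distance shrinks with depth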

  9. Explaining Common Variance Shared by Early Numeracy and Literacy

    ERIC Educational Resources Information Center

    Davidse, N. J.; De Jong, M. T.; Bus, A. G.

    2014-01-01

    How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…

  10. Dominance, Information, and Hierarchical Scaling of Variance Space.

    ERIC Educational Resources Information Center

    Ceurvorst, Robert W.; Krus, David J.

    1979-01-01

    A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)

  11. Variances of Plane Parameters Fitted to Range Data.

    PubMed

    Franaszek, Marek

    2010-01-01

    Formulas for variances of plane parameters fitted with Nonlinear Least Squares to point clouds acquired by 3D imaging systems (e.g., LADAR) are derived. Two different error objective functions used in minimization are discussed: the orthogonal and the directional functions. Comparisons of corresponding formulas suggest the two functions can yield different results when applied to the same dataset.
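
    The directional case is easy to sketch: for the plane model z = ax + by + c fitted by least squares, the parameter covariance is \sigma^2 (J^T J)^{-1}. A numpy sketch with a toy LADAR-like cloud follows; the paper's orthogonal-objective formulas are not reproduced here.

        import numpy as np

        def plane_fit_directional(pts):
            # Fit z = a*x + b*y + c; return parameters and their covariance.
            x, y, z = pts.T
            J = np.column_stack([x, y, np.ones_like(x)])
            p, res, *_ = np.linalg.lstsq(J, z, rcond=None)
            sigma2 = res[0] / (len(z) - 3)             # residual variance
            return p, sigma2 * np.linalg.inv(J.T @ J)

        rng = np.random.default_rng(0)
        xy = rng.uniform(-1, 1, size=(500, 2))
        z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 0.01, 500)
        p, cov = plane_fit_directional(np.column_stack([xy, z]))
        print(p, np.sqrt(np.diag(cov)))                # estimates and std devs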

  12. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.

    2016-01-01

    In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.

  13. 21 CFR 821.2 - Exemptions and variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL... unnecessary; (3) A complete description of alternative steps that are available, or that the petitioner...

  14. 21 CFR 898.14 - Exemptions and variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 898.14 Section 898.14 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... performance standard is unnecessary or unfeasible; (3) A complete description of alternative steps that...

  15. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
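
    One probe in the same spirit (our example, not the article's): three groups with identical means but very different spreads. Intuition often predicts a "significant" result here, yet the between-group variance, and hence F, is exactly zero.

        from scipy.stats import f_oneway

        g1 = [4.8, 5.0, 5.2]          # all three groups have mean 5.0
        g2 = [3.0, 5.0, 7.0]
        g3 = [1.0, 5.0, 9.0]
        print(f_oneway(g1, g2, g3))   # F = 0.0: no between-group effect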

  16. GalaxyCount: Galaxy counts and variance calculator

    NASA Astrophysics Data System (ADS)

    Bland-Hawthorn, Joss; Ellis, Simon

    2013-12-01

    GalaxyCount calculates the number and standard deviation of galaxies in a magnitude limited observation of a given area. The methods to calculate both the number and standard deviation may be selected from different options. Variances may be computed for circular, elliptical and rectangular window functions.

  17. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  18. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
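
    The idea translates directly into a few lines (toy data ours): each squared deviation is the area of a square, the variance is the average square's area, and the standard deviation is the side length of that average square.

        import numpy as np

        data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
        dev = data - data.mean()          # deviations: sides of the squares
        areas = dev ** 2                  # one square per observation
        variance = areas.mean()           # average square area -> 4.0
        std_dev = np.sqrt(variance)       # side of the average square -> 2.0
        print(variance, std_dev)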

  19. Variance in Math Achievement Attributable to Visual Cognitive Constructs

    ERIC Educational Resources Information Center

    Oehlert, Jeremy J.

    2012-01-01

    Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…

  20. Caution on the Use of Variance Ratios: A Comment.

    ERIC Educational Resources Information Center

    Shaffer, Juliet Popper

    1992-01-01

    Several meta-analytic studies of group variability use variance ratios as measures of effect size. Problems with this approach are discussed, including limitations of using means and medians of ratios. Mean logarithms and the geometric mean are not adversely affected by the arbitrary choice of numerator. (SLD)

  1. Some Computer Programs for Selected Problems in Analysis of Variance.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Bland, Patricia C.

    Selected examples using the statistical packages Statistical Package for the Social Sciences (SPSS), the Statistical Analysis System (SAS), and BMDP are presented to facilitate their use and encourage appropriate uses in: (1) a hierarchical design; (2) a confounded factorial design; and (3) variance component estimation procedures. To illustrate…

  2. Module organization and variance in protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Yu; Lee, Tsai-Ling; Chiu, Yi-Yuan; Lin, Yi-Wei; Lo, Yu-Shu; Lin, Chih-Ta; Yang, Jinn-Moon

    2015-03-01

    A module is a group of closely related proteins that act in concert to perform specific biological functions through protein-protein interactions (PPIs) that occur in time and space. However, the underlying module organization and variance remain unclear. In this study, we collected module templates to infer respective module families, including 58,041 homologous modules in 1,678 species, and PPI families using searches of complete genomic database. We then derived PPI evolution scores and interface evolution scores to describe the module elements, including core and ring components. Functions of core components were highly correlated with those of essential genes. In comparison with ring components, core proteins/PPIs were conserved across multiple species. Subsequently, protein/module variance of PPI networks confirmed that core components form dynamic network hubs and play key roles in various biological functions. Based on the analyses of gene essentiality, module variance, and gene co-expression, we summarize the observations of module organization and variance as follows: 1) a module consists of core and ring components; 2) core components perform major biological functions and collaborate with ring components to execute certain functions in some cases; 3) core components are more conserved and essential during organizational changes in different biological states or conditions.

  3. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2012-07-01 2012-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  4. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2013-07-01 2013-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  5. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  6. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  7. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  8. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...

  9. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...

  10. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
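
    A minimal sketch of the underlying issue, for the simplest case of a single variance estimated from i.i.d. data rather than the balanced multifacet designs treated in the article: the naive bootstrap resamples from the empirical distribution, whose plug-in variance is shrunk by a factor of (n-1)/n, so an explicit correction is needed to recover an unbiased estimate.

        # The plug-in variance of a bootstrap resample is shrunk twice: once
        # because the empirical distribution carries variance ((n-1)/n) s^2,
        # and once more inside each resample. Undoing both factors restores
        # an unbiased estimate in this i.i.d. case.
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 2.0, size=25)              # true variance = 4
        n, B = x.size, 4000

        boot = rng.choice(x, size=(B, n), replace=True)
        naive = boot.var(axis=1, ddof=0).mean()        # approx ((n-1)/n)^2 * s^2
        corrected = naive * (n / (n - 1)) ** 2
        print(naive, corrected, x.var(ddof=1))         # corrected tracks s^2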

  11. Genetic Variance in the SES-IQ Correlation.

    ERIC Educational Resources Information Center

    Eckland, Bruce K.

    1979-01-01

    Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)

  12. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling. PMID:27078480
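
    The diagnosis described here can be mimicked with a generic heavy-tailed toy problem; the Python sketch below is not the authors' QMC algorithm or their bridge-link remedy, only an illustration of how an infinite-variance estimator betrays itself.

        # A population with finite mean but infinite variance: classical Pareto
        # with alpha = 1.5 (mean 3). The sample standard deviation never
        # settles; it drifts upward with sample size, so a CLT-style error bar
        # computed from it is unreliable -- the signature of the problem.
        import numpy as np

        rng = np.random.default_rng(2)
        for n in (10**3, 10**5, 10**7):
            x = 1.0 + rng.pareto(1.5, size=n)      # Pareto(x_m = 1, alpha = 1.5)
            print(n, x.mean(), x.std(ddof=1))      # std grows roughly like n**(1/6)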

  13. Variance Estimation of Imputed Survey Data. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Brick, Mike; Kaufman, Steven; Walter, Elizabeth

    Missing data is a common problem in virtually all surveys. This study focuses on variance estimation and its consequences for analysis of survey data from the National Center for Education Statistics (NCES). Methods suggested by C. Sarndal (1992), S. Kaufman (1996), and S. Shao and R. Sitter (1996) are reviewed in detail. In section 3, the…
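
    For orientation, the sketch below applies Rubin's classical combining rules for multiply imputed data, which are a common baseline rather than the Sarndal, Kaufman, or Shao-Sitter estimators reviewed in the paper; all numbers are hypothetical.

        # Rubin's rules: with m imputed estimates q_i having within-imputation
        # variances u_i, the total variance is T = W + (1 + 1/m) * B, where W
        # is the mean of the u_i and B is the variance of the q_i.
        import numpy as np

        q = np.array([10.2, 9.8, 10.5, 10.1, 9.9])    # hypothetical point estimates
        u = np.array([0.40, 0.38, 0.42, 0.41, 0.39])  # their sampling variances

        m = len(q)
        W, B = u.mean(), q.var(ddof=1)
        T = W + (1 + 1 / m) * B
        print(q.mean(), T)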

  14. Exploratory Multivariate Analysis of Variance: Contrasts and Variables.

    ERIC Educational Resources Information Center

    Barcikowski, Robert S.; Elliott, Ronald S.

    The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…

  15. [ECoG classification based on wavelet variance].

    PubMed

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of the wavelet variance were derived from a discussion of the wavelet transform and adopted as the feature. Six channels with the most distinctive features were selected from the 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The variances of the wavelet coefficients containing the Mu and Beta rhythms were taken as features, based on the ERD/ERS phenomenon. The features were classified linearly with cross-validation. Off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets, respectively; the wavelet variance proved simple and effective and is suitable for feature extraction in BCI research. PMID:23865300
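
    A hypothetical sketch of the feature-extraction step (the db4 wavelet follows the abstract; the decomposition level, the channel data, and the omitted channel selection and linear classifier are assumptions):

        # Wavelet-variance features: decompose each selected channel with the
        # db4 wavelet and use the variances of the detail coefficients at each
        # level as the feature vector.
        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        ecog = rng.standard_normal((6, 3000))          # 6 channels, hypothetical data

        def wavelet_variance_features(signal, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.var(c) for c in coeffs[1:]])  # detail levels only

        features = np.concatenate([wavelet_variance_features(ch) for ch in ecog])
        print(features.shape)                          # (30,) = 6 channels x 5 levels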

  16. Facile synthesis of N-rich carbon quantum dots by spontaneous polymerization and incision of solvents as efficient bioimaging probes and advanced electrocatalysts for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi

    2016-01-01

    In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process of cyclic and nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, it developed a lotus seedpod surface-like structure of seed-like N-CDs decorating the surface of carbon layers with a high proportion of quaternary nitrogen moieties that exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, which was comparable to or even lower than commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR.

  17. Variance in the reproductive success of dominant male mountain gorillas.

    PubMed

    Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M

    2014-10-01

    Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.

  18. Gravity Wave Variances and Propagation Derived from AIRS Radiances

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.; Eckermann, S. D.

    2012-01-01

    As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS 90 field-of-views (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).

  19. DEVELOPMENT OF A NOVEL RADIATIVELY/CONDUCTIVELY STABILIZED BURNER FOR SIGNIFICANT REDUCTION OF NOx EMISSIONS AND FOR ADVANCING THE MODELING AND UNDERSTANDING OF PULVERIZED COAL COMBUSTION AND EMISSIONS

    SciTech Connect

    Noam Lior; Stuart W. Churchill

    2003-10-01

    ... the Gordon Conference on Modern Development in Thermodynamics. The results obtained are very encouraging for the development of the RCSC as a commercial burner for significant reduction of NO{sub x} emissions, and highly warrant further study and development.

  20. Recent advances in the management of chronic stable angina II. Anti-ischemic therapy, options for refractory angina, risk factor reduction, and revascularization

    PubMed Central

    Kones, Richard

    2010-01-01

    The objectives in treating angina are relief of pain and prevention of disease progression through risk reduction. Mechanisms, indications, clinical forms, doses, and side effects of the traditional antianginal agents – nitrates, β-blockers, and calcium channel blockers – are reviewed. A number of patients have contraindications or remain unrelieved from anginal discomfort with these drugs. Among newer alternatives, ranolazine, recently approved in the United States, indirectly prevents the intracellular calcium overload involved in cardiac ischemia and is a welcome addition to available treatments. None, however, are disease-modifying agents. Two options for refractory angina, enhanced external counterpulsation and spinal cord stimulation (SCS), are presented in detail. They are both well-studied and are effective means of treating at least some patients with this perplexing form of angina. Traditional modifiable risk factors for coronary artery disease (CAD) – smoking, hypertension, dyslipidemia, diabetes, and obesity – account for most of the population-attributable risk. Individual therapy of high-risk patients differs from population-wide efforts to prevent risk factors from appearing or reducing their severity, in order to lower the national burden of disease. Current American College of Cardiology/American Heart Association guidelines to lower risk in patients with chronic angina are reviewed. The Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial showed that in patients with stable angina, optimal medical therapy alone and percutaneous coronary intervention (PCI) with medical therapy were equal in preventing myocardial infarction and death. The integration of COURAGE results into current practice is discussed. For patients who are unstable, with very high risk, with left main coronary artery lesions, in whom medical therapy fails, and in those with acute coronary syndromes, PCI is indicated. Asymptomatic

  1. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Quarterly technical progress report, [July--September 1995]

    SciTech Connect

    1995-12-31

    This project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. The project provides a stepwise evaluation of the following NO{sub x} reduction technologies: Advanced overfire air (AOFA), Low NO{sub x} burners (LNB), LNB with AOFA, and advanced digital controls and optimization strategies. The project has completed the baseline, AOFA, LNB, and LNB+AOFA test segments, fulfilling all testing originally proposed to DOE. Phase 4 of the project, demonstration of advanced control/optimization methodologies for NO{sub x} abatement, is now in progress. The methodology selected for demonstration at Hammond Unit 4 is the Generic NO{sub x} Control Intelligent System (GNOCIS), which is being developed by a consortium consisting of the Electric Power Research Institute, PowerGen, Southern Company, Radian Corporation, U.K. Department of Trade and Industry, and U.S. Department of Energy. GNOCIS is a methodology that can result in improved boiler efficiency and reduced NO{sub x} emissions from fossil fuel fired boilers. Using a numerical model of the combustion process, GNOCIS applies an optimizing procedure to identify the best set points for the plant on a continuous basis. GNOCIS is in progress at Alabama Power's Gaston Unit 4 and PowerGen's Kingsnorth Unit 1. The first commercial demonstration of GNOCIS will be at Hammond 4.

  2. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  3. Fidelity between Gaussian mixed states with quantum state quadrature variances

    NASA Astrophysics Data System (ADS)

    Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao

    2016-04-01

    In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).

  4. Climate variance influence on the non-stationary plankton dynamics.

    PubMed

    Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine

    2013-08-01

    We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.

  5. The return of the variance: intraspecific variability in community ecology.

    PubMed

    Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie

    2012-04-01

    Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.

  7. Analysis of variance in spectroscopic imaging data from human tissues.

    PubMed

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.

  8. Female copying increases the variance in male mating success.

    PubMed

    Wade, M J; Pruett-Jones, S G

    1990-08-01

    Theoretical models of sexual selection assume that females choose males independently of the actions and choice of other individual females. Variance in male mating success in promiscuous species is thus interpreted as a result of phenotypic differences among males which females perceive and to which they respond. Here we show that, if some females copy the behavior of other females in choosing mates, the variance in male mating success and therefore the opportunity for sexual selection is greatly increased. Copying behavior is most likely in non-resource-based harem and lek mating systems but may occur in polygynous, territorial systems as well. It can be shown that copying behavior by females is an adaptive alternative to random choice whenever there is a cost to mate choice. We develop a statistical means of estimating the degree of female copying in natural populations where it occurs. PMID:2377613
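
    The core claim is easy to reproduce in simulation. In the hypothetical Python sketch below, males are identical, and each female either copies the choice of a previously mated female with probability p or chooses at random; the variance of male mating success rises sharply with the copying probability.

        # Identical males; each female copies a previous female's choice with
        # probability p_copy, otherwise chooses uniformly at random. Copying
        # inflates the variance of male mating success.
        import numpy as np

        rng = np.random.default_rng(4)
        n_males, n_females, trials = 10, 100, 1000

        def mating_success_variance(p_copy):
            variances = []
            for _ in range(trials):
                choices = []
                for _ in range(n_females):
                    if choices and rng.random() < p_copy:
                        choices.append(choices[rng.integers(len(choices))])
                    else:
                        choices.append(int(rng.integers(n_males)))
                variances.append(np.bincount(choices, minlength=n_males).var())
            return np.mean(variances)

        print(mating_success_variance(0.0), mating_success_variance(0.7))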

  9. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
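
    A schematic version of the compounding recipe, with a synthetic series standing in for the turbulence or exchange-rate data: estimate local variances on short windows, treat their empirical distribution as the parameter distribution, and compound it with the local Gaussian.

        # Compounding sketch: estimate local variances on short windows, then
        # build the long-horizon distribution as a Gaussian mixed over them.
        import numpy as np

        rng = np.random.default_rng(5)
        sigma = np.exp(0.3 * np.sin(np.linspace(0.0, 20.0, 20000)))  # slow volatility
        series = rng.standard_normal(20000) * sigma                  # synthetic signal

        local_var = series.reshape(-1, 500).var(axis=1, ddof=1)      # 40 windows

        # Draw from the compounded distribution: a window variance first, then
        # a Gaussian sample with that variance.
        v = rng.choice(local_var, size=100000)
        draws = rng.standard_normal(100000) * np.sqrt(v)
        print(series.var(), draws.var())          # long-horizon variances agree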

  10. No evidence for anomalously low variance circles on the sky

    SciTech Connect

    Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca

    2011-04-01

    In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.

  11. A surface layer variance heat budget for ENSO

    NASA Astrophysics Data System (ADS)

    Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.

    2015-05-01

    Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.

  12. Response variance in functional maps: neural darwinism revisited.

    PubMed

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  13. Constraining the local variance of H0 from directional analyses

    NASA Astrophysics Data System (ADS)

    Bengaly, C. A. P., Jr.

    2016-04-01

    We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8 ± 2.4 km s-1 Mpc-1) with that of the Planck's Cosmic Microwave Background data (H0 = 67.8 ± 0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9 ± 0.5 km s-1 Mpc-1 to 71.2 ± 0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble Constant variance of δH0 = (2.30 ± 0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL), for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be attributed to the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow high statistical significance for these results or a definitive conclusion about this issue.
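
    The hemispherical comparison itself is straightforward to sketch; the Python toy below uses synthetic directions and per-supernova H0 values (no real SNe data, bulk-flow model, or Monte Carlo significance prescription) to show the scan over axes and the resulting maximal asymmetry.

        # Hemispherical scan on synthetic data: for many random axes, compare
        # the mean H0 of SNe in the two hemispheres and keep the largest gap.
        import numpy as np

        rng = np.random.default_rng(6)
        n_sne = 200
        d = rng.standard_normal((n_sne, 3))
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # sky directions
        h0 = rng.normal(70.0, 2.0, n_sne)               # hypothetical per-SN H0

        best = 0.0
        for _ in range(500):
            axis = rng.standard_normal(3)
            axis /= np.linalg.norm(axis)
            mask = d @ axis > 0                         # one hemisphere
            best = max(best, abs(h0[mask].mean() - h0[~mask].mean()))
        print(best)                                     # maximal H0 asymmetry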

  14. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
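
    As a simplified relative of such an identifier, the sketch below implements ordinary recursive least squares for an ARX model; the paper's minimum-variance formulation additionally handles multiplicative noise and tracks the covariance of the identification error, which this toy omits. Parameter values are hypothetical.

        # Recursive least squares for y_k = a_k^T theta + noise, with an
        # ARX(1,1) regressor a_k = [y_{k-1}, u_{k-1}].
        import numpy as np

        rng = np.random.default_rng(7)
        theta_true = np.array([0.8, -0.5])          # hypothetical plant parameters

        theta, P = np.zeros(2), np.eye(2) * 1000.0  # estimate and its covariance
        y_prev, u_prev = 0.0, 0.0
        for _ in range(500):
            a = np.array([y_prev, u_prev])
            y = theta_true @ a + 0.1 * rng.standard_normal()
            K = P @ a / (1.0 + a @ P @ a)           # gain
            theta = theta + K * (y - a @ theta)     # parameter update
            P = P - np.outer(K, a @ P)              # covariance update
            y_prev, u_prev = y, rng.standard_normal()
        print(theta)                                # approaches theta_true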

  15. End-state comfort and joint configuration variance during reaching.

    PubMed

    Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J; Rosenbaum, David A; Scholz, John P; Zatsiorsky, Vladimir M; Latash, Mark L

    2013-03-01

    This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326

  16. End-state comfort and joint configuration variance during reaching

    PubMed Central

    Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J.; Rosenbaum, David A.; Scholz, John P.; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2013-01-01

    This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326

  17. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  18. A candidate mechanism underlying the variance of interictal spike propagation

    PubMed Central

    Sabolek, Helen R; Swiercz, Waldemar B.; Lillis, Kyle; Cash, Sydney S.; Huberfeld, Gilles; Zhao, Grace; Marie, Linda Ste.; Clemenceau, Stéphane; Barsh, Greg; Miles, Richard; Staley, Kevin J.

    2012-01-01

    Synchronous activation of neural networks is an important physiological mechanism, and dysregulation of synchrony forms the basis of epilepsy. We analyzed the propagation of synchronous activity through chronically epileptic neural networks. Electrocorticographic recordings from epileptic patients demonstrate remarkable variance in the pathways of propagation between sequential interictal spikes (IIS). Calcium imaging in chronically epileptic slice cultures demonstrates that pathway variance depends on the presence of GABAergic inhibition and that spike propagation becomes stereotyped following GABA-R blockade. Computer modeling suggests that GABAergic quenching of local network activations leaves behind regions of refractory neurons, whose late recruitment forms the anatomical basis of variability during subsequent network activation. Targeted path scanning of slice cultures confirmed local activations, while ex vivo recordings of human epileptic tissue confirmed the dependence of interspike variance on GABA-mediated inhibition. These data support the hypothesis that the paths by which synchronous activity spreads through an epileptic network change with each activation, based on the recent history of localized activity that has been successfully inhibited. PMID:22378874

  19. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
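
    The ANOVA route can be sketched compactly for a balanced design: with k isofemale lines of n individuals each, the among-line variance component is (MS_among - MS_within)/n, and the intraclass correlation gives a crude broad-sense heritability. The threshold-model and maximum likelihood machinery of the paper, and its quantal-response endpoints, are omitted; the data below are hypothetical continuous tolerances.

        # Balanced one-way ANOVA variance components for k isofemale lines of
        # n individuals: V_among = (MS_among - MS_within)/n, and the intraclass
        # correlation V_among/(V_among + V_within) is a crude broad-sense H^2.
        import numpy as np

        rng = np.random.default_rng(8)
        k, n = 12, 20
        line = rng.normal(0.0, 0.5, size=(k, 1))          # among-line (genetic) spread
        tol = line + rng.normal(0.0, 1.0, size=(k, n))    # tolerance phenotypes

        grand = tol.mean()
        ms_among = n * ((tol.mean(axis=1) - grand) ** 2).sum() / (k - 1)
        ms_within = ((tol - tol.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

        v_among = (ms_among - ms_within) / n
        print(v_among / (v_among + ms_within))    # near 0.25/(0.25 + 1) = 0.2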

  20. VAPOR: variance-aware per-pixel optimal resource allocation.

    PubMed

    Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K

    2006-02-01

    Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point-of-view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees, closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel-mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
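
    The shift in objective can be stated in a few lines: rather than choosing the allocation with the smallest expected distortion, a variance-aware scheme penalizes spread, as in the schematic and entirely hypothetical comparison below; the paper's actual per-pixel optimization over source and channel coding parameters is far richer.

        # Choose the allocation minimizing E[D] + lam * Var[D] instead of E[D]
        # alone; with lam = 0.2 the lower-variance option wins despite its
        # slightly higher expected distortion.
        candidates = {                            # (mean distortion, variance)
            "coarse source, strong channel code": (32.0, 4.0),
            "fine source, weak channel code": (30.0, 25.0),
        }
        lam = 0.2
        best = min(candidates, key=lambda c: candidates[c][0] + lam * candidates[c][1])
        print(best)                               # "coarse source, strong channel code"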

  1. Variance in prey abundance influences time budgets of breeding seabirds: Evidence from pigeon guillemots Cepphus columba

    USGS Publications Warehouse

    Litzow, M.A.; Piatt, J.F.

    2003-01-01

    We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.

  2. A new approach for crop identification with wavelet variance and JM distance.

    PubMed

    Qiu, Bingwen; Fan, Zhanling; Zhong, Ming; Tang, Zhenghong; Chen, Chongcheng

    2014-11-01

    This paper develops a new crop mapping method through combined utilization of both time and frequency information based on wavelet variance and Jeffries-Matusita (JM) distance (CIWJ for short). A two-dimensional wavelet spectrum was obtained from datasets of daily continuous vegetation indices through a continuous wavelet transform using the Mexican hat and the Morlet mother wavelets. The time-average wavelet variance (TAWV) and the scale-average wavelet variance (SAWV) were then calculated based on the wavelet spectrum of the Mexican hat and the Morlet wavelet, respectively. The class separability based on the JM distance was evaluated to discriminate the proper period or scale range applied. Finally, a procedure for criteria quantification was developed using the TAWV and SAWV as the major metrics, and the similarity between unclassified pixels and established land use/cover types was calculated. The proposed CIWJ method was applied to the middle Hexi Corridor in northwest China using 250-m 8-day composite moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index (EVI) time series datasets in 2012. The CIWJ method was shown to be efficient in crop field mapping, with an overall accuracy of 83.6 % and kappa coefficient of 0.7009, assessed with 30 m Chinese Environmental Disaster Reduction Satellite (HJ-1)-derived data. Compared with methods utilizing information on either frequency or time, the CIWJ method demonstrates tremendous potential for efficient crop mapping and for further applications. This method could be applied to either coarse or high spatial resolution images for agricultural crop identification, as well as other more general or specific land use classifications.
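
    The JM distance under Gaussian class assumptions has a standard closed form, JM = 2(1 - exp(-B)) with B the Bhattacharyya distance, which the sketch below evaluates for two hypothetical crop classes described by wavelet-variance features; values approach 2 as the classes become fully separable.

        # JM = 2(1 - exp(-B)), with B the Bhattacharyya distance between two
        # Gaussian class models; JM ranges from 0 to 2 (fully separable).
        import numpy as np

        def jm_distance(mu1, cov1, mu2, cov2):
            cov = (cov1 + cov2) / 2.0
            diff = mu1 - mu2
            b = (diff @ np.linalg.solve(cov, diff)) / 8.0 + 0.5 * np.log(
                np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return 2.0 * (1.0 - np.exp(-b))

        mu_a, mu_b = np.array([0.2, 1.1]), np.array([0.6, 0.4])  # hypothetical means
        cov_a = cov_b = np.eye(2) * 0.05
        print(jm_distance(mu_a, cov_a, mu_b, cov_b))             # about 1.6 out of 2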

  3. Estimation of Noise-Free Variance to Measure Heterogeneity

    PubMed Central

    Winkler, Tilo; Melo, Marcos F. Vidal; Degani-Costa, Luiza H.; Harris, R. Scott; Correia, John A.; Musch, Guido; Venegas, Jose G.

    2015-01-01

    Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed to the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value yielding a coefficient of variation squared (CV2). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr2) for comparison with our estimate of noise-free or ‘true’ heterogeneity (CVt2). We found that CVt2 was only 5.4% higher than CVr2. Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt2 was 0.10 (range: 0.03–0.30), while the mean CV2 including noise was 0.24 (range: 0.10–0.59). CVt2 was in average 41.5% of the CV2 measured including noise (range: 17.8–71.2%). The reproducibility of CVt2 was evaluated using three repeated PET scans from five subjects. Individual CVt2 were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt2 in PET scans, and may be useful for similar statistical problems in experimental data. PMID:25906374
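
    The key identity, that the measured CV^2 is linear in 1/n with the noise-free CV_t^2 as intercept, can be rehearsed on synthetic data; the image values and noise level below are hypothetical, not the PET measurements of the study.

        # CV^2 measured from images averaging n replicates is linear in 1/n;
        # the intercept of the linear fit is the noise-free heterogeneity CV_t^2.
        import numpy as np

        rng = np.random.default_rng(9)
        signal = rng.gamma(10.0, 1.0, size=500)          # heterogeneous "image"
        cvt2_true = signal.var() / signal.mean() ** 2

        ns = np.array([1, 2, 4, 8, 16])
        cv2 = []
        for n in ns:
            noisy = signal + rng.normal(0.0, 3.0, size=(n, 500)).mean(axis=0)
            cv2.append(noisy.var() / noisy.mean() ** 2)

        slope, intercept = np.polyfit(1.0 / ns, cv2, 1)
        print(intercept, cvt2_true)                      # intercept estimates CV_t^2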

  4. A proxy for variance in dense matching over homogeneous terrain

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  5. Reducing experimental variability in variance-based sensitivity analysis of biochemical reaction systems.

    PubMed

    Zhang, Hong-Xuan; Goutsias, John

    2011-03-21

    Sensitivity analysis is a valuable task for assessing the effects of biological variability on cellular behavior. Available techniques require knowledge of nominal parameter values, which cannot be determined accurately due to experimental uncertainty typical to problems of systems biology. As a consequence, the practical use of existing sensitivity analysis techniques may be seriously hampered by the effects of unpredictable experimental variability. To address this problem, we propose here a probabilistic approach to sensitivity analysis of biochemical reaction systems that explicitly models experimental variability and effectively reduces the impact of this type of uncertainty on the results. The proposed approach employs a recently introduced variance-based method to sensitivity analysis of biochemical reaction systems [Zhang et al., J. Chem. Phys. 134, 094101 (2009)] and leads to a technique that can be effectively used to accommodate appreciable levels of experimental variability. We discuss three numerical techniques for evaluating the sensitivity indices associated with the new method, which include Monte Carlo estimation, derivative approximation, and dimensionality reduction based on orthonormal Hermite approximation. By employing a computational model of the epidermal growth factor receptor signaling pathway, we demonstrate that the proposed technique can greatly reduce the effect of experimental variability on variance-based sensitivity analysis results. We expect that, in cases of appreciable experimental variability, the new method can lead to substantial improvements over existing sensitivity analysis techniques.
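
    For context, a crude variance-based first-order index S_i = Var(E[Y|X_i])/Var(Y) can be estimated by binning, as in the hypothetical sketch below; the paper's Monte Carlo, derivative-approximation, and Hermite-expansion estimators, and its treatment of experimental variability, are not reproduced here.

        # First-order index S_i = Var(E[Y | X_i]) / Var(Y), estimated by
        # binning each input; Y is a toy response, not a reaction network model.
        import numpy as np

        rng = np.random.default_rng(10)
        n = 200000
        x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
        y = x1 + 0.2 * x2 + 0.05 * rng.standard_normal(n)

        def first_order_index(x, y, bins=50):
            edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
            idx = np.digitize(x, edges[1:-1])               # bin labels 0..bins-1
            means = (np.bincount(idx, weights=y, minlength=bins)
                     / np.bincount(idx, minlength=bins))
            return means.var() / y.var()

        print(first_order_index(x1, y), first_order_index(x2, y))   # ~0.93, ~0.04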

  6. Turbulent-Heat-Flux and Temperature-Variance Budgets in a Single-Rib Mounting Channel

    NASA Astrophysics Data System (ADS)

    Miura, Takahiro; Matsubara, Koji; Sakurai, Atsushi

    Heat transfer and fluid flow in a single-rib mounting channel were investigated by directly solving Navier-Stokes and energy equations. The flow and thermal fields were considered to be fully developed at the inlet of the channel, and the simulation was made for spatial advancement of turbulent heat transfer. The Reynolds number based on the friction velocity at the inlet and the channel half width was 150. The Prandtl number was 0.71. The budgets for turbulent heat fluxes and temperature variance at various sections were presented and were investigated, which would be useful for testing and developing turbulence models. Near a circular vortex in front of the rib, pressure diffusion terms made an important contribution. Remarkable production terms were generated near a reattachment point. Production and dissipation terms were not dominant in front of and above the rib, and a time scale ratio exceeded 2.0 in the region.

  7. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Second quarterly technical progress report, [April--June 1993]

    SciTech Connect

    Not Available

    1993-12-31

    The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. Baseline, AOFA, and LNB without AOFA test segments have been completed. Analysis of the 94 days of LNB long-term data collected show the full-load NO{sub x} emission levels to be approximately 0.65 lb/MBtu with flyash LOI values of approximately 8 percent. Corresponding values for the AOFA configuration are 0.94 lb/MBtu and approximately 10 percent. For comparison, the long-term full-load, baseline NO{sub x} emission level was approximately 1.24 lb/MBtu at 5.2 percent LOI. Comprehensive testing of the LNB plus AOFA configuration began in May 1993 and is scheduled to end during August 1993. As of June 30, the diagnostic, performance, chemical emissions tests segments for this configuration have been conducted and 29 days of long-term, emissions data collected. Preliminary results from the May--June 1993 tests of the LNB plus AOFA system show that the full load NO{sub x} emissions are approximately 0.42 lb/MBtu with corresponding fly ash LOI values near 8 percent. This is a substantial improvement in both NO{sub x} emissions and LOI values when compared to the results obtained during the February--March 1992 abbreviated testing of this system.

  8. Compression station upgrades include advanced noise reduction

    SciTech Connect

    Dunning, V.R.; Sherikar, S.

    1998-10-01

    Since its inception in the mid-'80s, AlintaGas' Dampier to Bunbury natural gas pipeline has been constantly undergoing a series of upgrades to boost capacity and meet other needs. Extending northward about 850 miles from near Perth to the northwest shelf, the 26-inch line was originally served by five compressor stations. In the 1989-91 period, three new compressor stations were added to increase capacity, and a ninth station was added in 1997. Instead of using noise-path-treatment mufflers to reduce existing noise, it was decided to use noise-source-treatment technology to prevent noise creation in the first place. In the field, operation of these new noise-source treatment attenuators has been very quiet. If there was any thought earlier of guaranteed noise-level verification, it is not considered a priority now. It is also anticipated that as AlintaGas proceeds with its pipeline and compressor station upgrade program, similar noise-source treatment equipment will be employed and retrofitted into older stations where the need to reduce noise and potential radiant-heat exposure is indicated.

  9. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  10. FMRI group analysis combining effect estimates and their variances

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.

    2012-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost; our efficient implementation makes this approach practical.
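
    The core ingredient here, weighting each subject's effect estimate by its precision while allowing a cross-subject variance component, can be sketched for a single voxel. The snippet below uses the DerSimonian-Laird moment estimator as a simple stand-in for the paper's estimation machinery; it illustrates the idea, not the authors' algorithm.

      import numpy as np

      def random_effects_mean(beta, var_within):
          """Precision-weighted group effect for one voxel, combining
          per-subject effect estimates `beta` with their within-subject
          variances `var_within`.  Cross-subject variance tau2 is estimated
          by the DerSimonian-Laird moment method (a stand-in for the
          paper's machinery)."""
          beta = np.asarray(beta, float)
          v = np.asarray(var_within, float)
          w = 1.0 / v
          mu_fixed = np.sum(w * beta) / np.sum(w)
          q = np.sum(w * (beta - mu_fixed) ** 2)           # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(beta) - 1)) / c)       # cross-subject variance
          w_star = 1.0 / (v + tau2)
          mu = np.sum(w_star * beta) / np.sum(w_star)
          se = np.sqrt(1.0 / np.sum(w_star))
          return mu, se, mu / se                           # effect, SE, t-like stat

      print(random_effects_mean([1.2, 0.8, 1.5, 0.9], [0.1, 0.3, 0.2, 0.15]))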

  11. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
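
    For context, the closed solution mentioned above for homoscedastic errors with a known error-variance ratio is classical Deming regression. The sketch below implements that textbook estimator (not the paper's new least-squares derivation); `delta` is the ratio of the Y-error to X-error variances, and the synthetic data stand in for observed magnitude pairs.

      import numpy as np

      def deming(X, Y, delta=1.0):
          """Deming regression y = a*x + b when both variables carry
          homoscedastic errors; delta = var(err_Y) / var(err_X).
          Classical closed-form slope and intercept."""
          X, Y = np.asarray(X, float), np.asarray(Y, float)
          xm, ym = X.mean(), Y.mean()
          sxx = np.mean((X - xm) ** 2)
          syy = np.mean((Y - ym) ** 2)
          sxy = np.mean((X - xm) * (Y - ym))
          a = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                           + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
          return a, ym - a * xm

      # e.g. Mw vs. mb with equal error variances (delta = 1):
      rng = np.random.default_rng(1)
      x_true = rng.uniform(4.0, 7.0, 200)
      X = x_true + rng.normal(0.0, 0.1, 200)
      Y = 1.1 * x_true - 0.3 + rng.normal(0.0, 0.1, 200)
      print(deming(X, Y))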

  12. THE COLUMN DENSITY VARIANCE-M_s RELATIONSHIP

    SciTech Connect

    Burkhart, Blakesley; Lazarian, A.

    2012-08-10

    Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part due to the fact that the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_{Σ/Σ0}-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_{ρ/ρ0}-M_s trend but includes a scaling parameter A, such that σ²_{ln(Σ/Σ0)} = A × ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance, in addition to existing PDF-sonic Mach number relations.
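
    Using the fitted relation above, an observed 2D column density variance converts directly to a sonic Mach number estimate; a minimal sketch (the defaults are the simulation fit quoted above, and the example uses the Taurus-like parameters):

      import numpy as np

      def sonic_mach(sigma2_ln_col, A=0.11, b=1.0 / 3.0):
          """Invert sigma^2_{ln(Sigma/Sigma0)} = A * ln(1 + b^2 * Ms^2)
          to estimate the sonic Mach number from an observed 2D column
          density variance."""
          return np.sqrt(np.exp(sigma2_ln_col / A) - 1.0) / b

      # Taurus-like fit quoted above: A = 0.16, b = 0.5
      print(sonic_mach(0.5, A=0.16, b=0.5))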

  13. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    We calculated the seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
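
    The survey does not spell out its thresholds, so the sketch below is only a generic reading of the local variance idea: each pixel is compared with the mean and standard deviation of its surrounding window and flagged as a positive or negative anomaly when it departs by more than k standard deviations. The window size and k are illustrative choices, not the study's.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_anomalies(ndvi, window=9, k=2.0):
          """Classify each pixel of a seasonal integrated-NDVI image as
          -1 (negative anomaly), 0 (normal) or +1 (positive anomaly)
          relative to its local window."""
          mean = uniform_filter(ndvi, size=window)
          mean_sq = uniform_filter(ndvi ** 2, size=window)
          std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
          z = np.where(std > 0, (ndvi - mean) / std, 0.0)
          return np.where(z > k, 1, np.where(z < -k, -1, 0))

      # Per-year anomaly maps can then be summed to count anomalous years:
      # counts = sum(local_anomalies(img) for img in yearly_ndvi_images)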

  14. Multi-observable Uncertainty Relations in Product Form of Variances

    NASA Astrophysics Data System (ADS)

    Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing

    2016-08-01

    We investigate the product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, a tight uncertainty relation satisfied by three observables is derived, which is shown to be better than the ones derived from the strengthened Heisenberg and generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented.
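
    For context, the two-observable baselines that such product-form relations strengthen are the Heisenberg-Robertson and generalized Schrödinger inequalities, quoted here in standard form:

      % Heisenberg-Robertson and generalized Schrodinger relations for a
      % pair of observables A, B:
      \begin{align}
        (\Delta A)^{2}(\Delta B)^{2} &\ge
          \Bigl|\tfrac{1}{2i}\langle[A,B]\rangle\Bigr|^{2}, \\
        (\Delta A)^{2}(\Delta B)^{2} &\ge
          \Bigl|\tfrac{1}{2i}\langle[A,B]\rangle\Bigr|^{2}
          + \Bigl|\tfrac{1}{2}\langle\{A,B\}\rangle
          - \langle A\rangle\langle B\rangle\Bigr|^{2}.
      \end{align}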

  15. Multi-observable Uncertainty Relations in Product Form of Variances.

    PubMed

    Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing

    2016-01-01

    We investigate the product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, a tight uncertainty relation satisfied by three observables is derived, which is shown to be better than the ones derived from the strengthened Heisenberg and generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851

  16. Critical points of multidimensional random Fourier series: Variance estimates

    NASA Astrophysics Data System (ADS)

    Nicolaescu, Liviu I.

    2016-08-01

    We investigate the number of critical points of a Gaussian random smooth function u_ɛ on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ɛ → 0. Let N(u_ɛ) denote the number of critical points of u_ɛ. We prove the existence of constants C, C′ such that, as ɛ goes to zero, the expectation of the random variable ɛ^m N(u_ɛ) converges to C, while its variance is extremely small and behaves like C′ɛ^m.

  17. Multi-observable Uncertainty Relations in Product Form of Variances

    PubMed Central

    Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing

    2016-01-01

    We investigate the product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, a tight uncertainty relation satisfied by three observables is derived, which is shown to be better than the ones derived from the strengthened Heisenberg and generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851

  18. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
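
    A minimal numerical sketch of the idea: the ordinary overlapping Allan variance alongside a "totalized" variant that, following the description above, removes the overall frequency difference and wraps the series periodically so that every averaging window is sampled. This is one reading of the statistic, not the authors' exact definition; consult the paper for the precise form of TOTALVAR.

      import numpy as np

      def allan_var(y, m):
          """Overlapping Allan variance of fractional-frequency data y
          at averaging factor m (tau = m * tau0)."""
          ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
          d = ybar[m:] - ybar[:-m]
          return 0.5 * np.mean(d ** 2)

      def total_var(y, m):
          """Sketch of the 'total' variant described above: remove the
          overall frequency difference, wrap the series periodically, and
          average Allan differences over all wrapped positions."""
          y0 = y - y.mean()                       # remove overall frequency offset
          yw = np.concatenate([y0, y0[: 2 * m]])  # wrap enough for all windows
          ybar = np.convolve(yw, np.ones(m) / m, mode="valid")
          d = ybar[m : m + len(y)] - ybar[: len(y)]
          return 0.5 * np.mean(d ** 2)

      rng = np.random.default_rng(0)
      y = rng.normal(size=4096)                   # white FM noise
      print(allan_var(y, 16), total_var(y, 16))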

  19. Analysis of variance of thematic mapping experiment data.

    USGS Publications Warehouse

    Rosenfield, G.H.

    1981-01-01

    As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author

  20. Two-dimensional finite-element temperature variance analysis

    NASA Technical Reports Server (NTRS)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
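
    In present-day notation, the variance analysis described above is first-order uncertainty propagation through the finite-element solution; schematically (our notation, not the report's):

      % T = T(p): nodal temperatures as a function of the thermal inputs p
      % (conductivities, boundary temperatures, ...) with covariance C_p.
      \begin{equation}
        C_{T} \;\approx\; J\,C_{p}\,J^{\mathsf{T}},
        \qquad
        J_{ij} = \frac{\partial T_{i}}{\partial p_{j}},
      \end{equation}
      % so each predicted temperature's variance is a diagonal entry of C_T.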

  1. Large-scale magnetic variances near the South Solar Pole

    NASA Technical Reports Server (NTRS)

    Jokipii, J. R.; Kota, J.; Smith, E.; Horbury, T.; Giacalone, J.

    1995-01-01

    We summarize recent Ulysses observations of the variances over large temporal scales in the interplanetary magnetic field components and their increase as Ulysses approached the South Solar Pole. A model of these fluctuations is shown to provide a very good fit to the observed amplitude and temporal variation of the fluctuations. The model further predicts that the transport of cosmic rays in the heliosphere will be significantly altered by this level of fluctuations. In addition to altering the inward diffusion and drift access of cosmic rays over the solar poles, the magnetic fluctuations also imply a large latitudinal diffusion, caused primarily by the associated field-line random walk.

  2. Recognition by variance: learning rules for spatiotemporal patterns.

    PubMed

    Barak, Omri; Tsodyks, Misha

    2006-10-01

    Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand placed on the nervous system, exemplified for instance by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output. PMID:16907629

  3. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ²_c(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.

  4. Concentration variance decay during magma mixing: a volcanic chronometer.

    PubMed

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-01-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
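
    Since the abstract states that concentration variance decays exponentially, the chronometer reduces to inverting that decay; a minimal sketch, with an illustrative rate constant rather than the paper's experimental calibration:

      import numpy as np

      def mixing_time(var_initial, var_observed, cvd_rate):
          """Time elapsed since mixing began, from the exponential decay
          sigma^2(t) = sigma^2(0) * exp(-R t) stated above.  cvd_rate (R)
          is the experimentally calibrated CVD rate; the numbers below are
          illustrative, not the paper's calibration."""
          return np.log(var_initial / var_observed) / cvd_rate

      # e.g. variance reduced to 10% of its initial value, with R = 0.002 1/s:
      t = mixing_time(1.0, 0.1, 0.002)
      print(t / 60.0, "minutes")   # ~19 minutes, in the 'tens of minutes' range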

  5. Variance in saccadic eye movements reflects stable traits.

    PubMed

    Meyhöfer, Inga; Bertsch, Katja; Esser, Moritz; Ettinger, Ulrich

    2016-04-01

    Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance are high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person by situation interactions have not been investigated. To do so, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of variance of a single measurement was explained by trans-situationally stable person effects, while situation aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes.

  6. Cosmic Variance in the Nanohertz Gravitational Wave Background

    NASA Astrophysics Data System (ADS)

    Roebber, Elinore; Holder, Gilbert; Holz, Daniel E.; Warren, Michael

    2016-03-01

    We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes (SMBHs) to estimate the formation rates of SMBH binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ∼2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly generated populations of SMBH binaries, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of ∼10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty on the amplitude of the underlying power law spectrum. This Poisson uncertainty dominates at ≳ 20 nHz, while at lower frequencies the dominant uncertainty is related to our poor understanding of the astrophysical scaling relations, although very low frequencies may be dominated by uncertainties related to the final parsec problem and the processes which drive binaries to the gravitational wave dominated regime. Cosmological effects are negligible at all frequencies.

  7. Concentration variance decay during magma mixing: a volcanic chronometer

    PubMed Central

    Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.

    2015-01-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555

  8. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
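
    The core of the minimum-variance idea is compact enough to display. Continuity Q = wdv forces the width, depth and velocity exponents to sum to one; in the simplest three-variable reading, minimizing the sum of squared exponents splits them equally. Langbein's full theory carries the additional variables listed above (shear stress, friction factor, slope, stream power), so its predicted exponents depart from this equal split; the display below is only a stripped-down illustration of the principle.

      % Continuity Q = w d v with w ~ Q^b, d ~ Q^f, v ~ Q^m forces
      %   b + f + m = 1.
      % Restricting attention to these three variables, the minimum-variance
      % postulate picks the exponent set of smallest total variance:
      \begin{equation}
        \min_{\,b+f+m=1}\;\bigl(b^{2}+f^{2}+m^{2}\bigr)
        \;\Longrightarrow\;
        b=f=m=\tfrac{1}{3}.
      \end{equation}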

  9. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    PubMed Central

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  10. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    NASA Technical Reports Server (NTRS)

    Hahne, Gerhard

    2012-01-01

    Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, ..., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and of the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  11. Discordance of DNA methylation variance between two accessible human tissues.

    PubMed

    Jiang, Ruiwei; Jones, Meaghan J; Chen, Edith; Neumann, Sarah M; Fraser, Hunter B; Miller, Gregory E; Kobor, Michael S

    2015-01-01

    Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically we compared probe-wise DNA methylation variance, and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indication of differential association of BEC and PBMC with demographic variables. The work presented here offers insight into variability of DNA methylation between individuals and across tissues and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations.

  12. Implications and applications of the variance-based uncertainty equalities

    NASA Astrophysics Data System (ADS)

    Yao, Yao; Xiao, Xing; Wang, Xiaoguang; Sun, C. P.

    2015-06-01

    In quantum mechanics, the variance-based Heisenberg-type uncertainty relations are a series of mathematical inequalities posing the fundamental limits on the achievable accuracy of the state preparations. In contrast, we construct and formulate two quantum uncertainty equalities, which hold for all pairs of incompatible observables and imply the new uncertainty relations recently introduced by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113, 260401 (2014), 10.1103/PhysRevLett.113.260401]. In fact, we obtain a series of inequalities with a hierarchical structure, including the Maccone-Pati inequalities as a special (weakest) case. Furthermore, we present an explicit interpretation lying behind the derivations and relate these relations to the so-called intelligent states. As an illustration, we investigate the properties of these uncertainty inequalities in the qubit system and a state-independent bound is obtained for the sum of variances. Finally, we apply these inequalities to the spin squeezing scenario and its implication in interferometric sensitivity is also discussed.
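
    For reference, one member of the Maccone-Pati pair of sum relations that the above equalities reduce to, in its standard form: for any state |ψ⟩ and any |ψ⊥⟩ orthogonal to it,

      \begin{equation}
        \Delta A^{2} + \Delta B^{2} \;\ge\;
        \pm i\,\langle[A,B]\rangle
        + \bigl|\langle\psi^{\perp}|\,A \pm iB\,|\psi\rangle\bigr|^{2},
      \end{equation}
      % with the sign chosen so that the right-hand side is positive.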

  13. Concentration variance decay during magma mixing: a volcanic chronometer

    NASA Astrophysics Data System (ADS)

    Perugini, D.; De Campos, C. P.; Petrelli, M.; Dingwell, D. B.

    2015-12-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.

  14. Concentration variance decay during magma mixing: a volcanic chronometer

    NASA Astrophysics Data System (ADS)

    Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.

    2015-09-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.

  15. A minimum variance method for genome-wide data-driven normalization of quantitative real-time polymerase chain reaction expression data.

    PubMed

    Garcia, Benjamin; Walter, Nicholas D; Dolganov, Gregory; Coram, Marc; Davis, J Lucian; Schoolnik, Gary K; Strong, Michael

    2014-08-01

    Advances in multiplex qRT-PCR have enabled increasingly accurate and robust quantification of RNA, even at lower concentrations, facilitating RNA expression profiling in clinical and environmental samples. Here we describe a data-driven qRT-PCR normalization method, the minimum variance method, and evaluate it on clinically derived Mycobacterium tuberculosis samples with variable transcript detection percentages. For moderate to significant amounts of nondetection (∼50%), our minimum variance method consistently produces the lowest false discovery rates compared to commonly used data-driven normalization methods.

  16. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  17. Robust Techniques for Testing Heterogeneity of Variance Effects in Factorial Designs.

    ERIC Educational Resources Information Center

    O'Brien, Ralph G.

    1978-01-01

    Several ways of using traditional analysis of variance to test the homogeneity of variance in factorial designs with equal or unequal cell sizes are compared using theoretical and Monte Carlo results. (Author/JKS)

  18. Matrix Differencing as a Concise Expression of Test Variance: A Computer Implementation.

    ERIC Educational Resources Information Center

    Krus, David J.; Wilkinson, Sue Marie

    1986-01-01

    Matrix differencing of data vectors is introduced as a method for computing test variance and is compared to traditional analysis of variance. Applications for computer assisted instruction, provided by supplemental computer software, are also described. (Author/GDC)

  19. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...

  20. Understanding the influence of watershed storage caused by human interferences on ET variance

    NASA Astrophysics Data System (ADS)

    Zeng, R.; Cai, X.

    2014-12-01

    Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface and groundwater; however, excessive irrigation may cause the depletion of watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko Hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions, expressed as long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
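
    Schematically, the decomposition described above has the following bookkeeping, with E_0 denoting potential ET and the weights a, b, c standing in for the study's weighting functions (their exact forms, derived from long-term climate and catchment properties, are the study's contribution):

      % Writing the water balance as ET = P - Delta S - R and linearizing
      % within the Budyko framework yields weights a, b, c:
      \begin{equation}
        \sigma^{2}_{ET} \;\approx\;
        a\,\sigma^{2}_{P} \;+\; b\,\sigma^{2}_{E_{0}} \;+\; c\,\sigma^{2}_{\Delta S}
        \;+\; \text{covariance terms}.
      \end{equation}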

  1. Analysis of variance of an underdetermined geodetic displacement problem

    SciTech Connect

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  2. Variance of indoor radon concentration: Major influencing factors.

    PubMed

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. PMID:26409145

  3. Linear minimum variance filters applied to carrier tracking

    NASA Technical Reports Server (NTRS)

    Gustafson, D. E.; Speyer, J. L.

    1976-01-01

    A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.

  4. The genetic and environmental variance underlying elementary cognitive tasks.

    PubMed

    Petrill, S A; Thompson, L A; Detterman, D K

    1995-05-01

    Although previous studies have examined the genetic and environmental influences upon general intelligence and specific cognitive abilities in school-age children, few studies have examined elementary cognitive tasks in this population. The current study included 149 MZ and 138 same-sex DZ twin pairs who participated in the Western Reserve Twin Project. Thirty measures from the Cognitive Abilities Test (CAT; Detterman, 1986) were studied. Results indicate that (1) these measures are reliable indicators of general intelligence in children and (2) the structure of genetic and environmental influences varies across measures. These results not only indicate that elementary cognitive tasks display heterogeneous genetic and environmental effects, but also may demonstrate that individual differences in biologically based processes are not necessarily due to genetic variance.

  5. Errors in radial velocity variance from Doppler wind lidar

    NASA Astrophysics Data System (ADS)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; Pryor, S. C.

    2016-08-01

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Using both statistically simulated and observed data, this paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10 %.
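
    The structure of those two error terms can be written down for an idealized stationary Gaussian series; the scalings below are schematic, with order-unity constants omitted (they depend on the autocorrelation shape and on the lidar volume averaging, which is precisely what the paper quantifies).

      % Schematic error scalings for a sample variance computed over
      % duration T of a stationary Gaussian series with integral
      % timescale tau:
      \begin{equation}
        \frac{\operatorname{bias}\bigl[\hat{\sigma}^{2}\bigr]}{\sigma^{2}}
          \sim -\frac{\tau}{T},
        \qquad
        \frac{\operatorname{std}\bigl[\hat{\sigma}^{2}\bigr]}{\sigma^{2}}
          \sim \sqrt{\frac{\tau}{T}}\,.
      \end{equation}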

  6. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

  7. Correct use of repeated measures analysis of variance.

    PubMed

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).

  8. Slim completions offer limited stimulation variances: Part 3

    SciTech Connect

    Brunsman, B.J. ); Matson, R. ); Shook, R.A. )

    1994-12-01

    This is the third in a series of five articles addressing barriers to increased US utilization of slimhole drilling and completion techniques. Previous articles discussed slimhole drilling and cementing. The focus of this article is stimulation, with an emphasis on hydraulic fracturing. This series is based on a study conducted for the Gas Research Institute (GRI) by an industry team consisting of Maurer Engineering, BJ Services, Baker Oil Tools, and Halliburton. Parts 1 and 2 were published in the September and October 1994 issues of Petroleum Engineer International, respectively. Potential cost savings resulting from slimhole drilling and completions of gas wells are often inhibited by the limitations on hydraulic fracturing. Variances from conventional fracturing include excessive friction pressure, fracture fluid degradation due to excessive shear rates, proppant bridging, and limited diverting options.

  9. Estimation of measurement variance in the context of environment statistics

    NASA Astrophysics Data System (ADS)

    Maiti, Pulakesh

    2015-02-01

    The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce high-quality statistical information, for which timely, reliable and comparable data are needed. The lack of proper, uniform definitions and unambiguous classifications poses serious problems for procuring high-quality data, and these problems cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.

  10. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  11. INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND

    SciTech Connect

    TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.

    2012-07-10

    The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.

  12. Low variance at large scales of WMAP 9 year data

    SciTech Connect

    Gruppuso, A.; Finelli, F.; Rosa, A. De; Mandolesi, N.; Natoli, P.; Paci, F.; Molinari, D. E-mail: natoli@fe.infn.it E-mail: finelli@iasfbo.inaf.it E-mail: derosa@iasfbo.inaf.it

    2013-07-01

    We use an optimal estimator to study the variance of the WMAP 9 CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulation, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the considered masks in this analysis, which cover at least the 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles ( > 5°). We find instead no anomaly in polarization. As a byproduct of our analysis, we present new, optimal estimates of the WMAP 9 CMB angular power spectra from the WMAP 9 year data at low resolution.

  13. Variance estimation for the Federal Waterfowl Harvest Surveys

    USGS Publications Warehouse

    Geissler, P.H.

    1988-01-01

    The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
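
    The bootstrap step can be sketched generically: resample whole sampling units with replacement, recompute the estimate, and take the variance over replicates. The code below is a minimal illustration with a toy per-hunter array; in the actual surveys the resampled units would be the cluster samples of hunters and the estimator the full harvest estimate, which is what lets the bootstrap absorb the between-year correlations described above.

      import numpy as np

      def bootstrap_variance(estimator, sample, n_boot=2000, seed=0):
          """Bootstrap estimate of the variance of `estimator`, a function
          of the sample, by resampling whole units with replacement."""
          rng = np.random.default_rng(seed)
          sample = np.asarray(sample)
          stats = np.array([
              estimator(sample[rng.integers(0, len(sample), len(sample))])
              for _ in range(n_boot)
          ])
          return stats.var(ddof=1)

      toy_harvest = np.array([3, 0, 5, 2, 8, 1, 0, 4, 6, 2])  # birds per hunter
      print(bootstrap_variance(np.mean, toy_harvest))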

  14. From means and variances to persons and patterns

    PubMed Central

    Grice, James W.

    2015-01-01

    A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672

  15. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    NASA Astrophysics Data System (ADS)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work applies Ant Colony Optimization (ACO), originally developed as a meta-heuristic for combinatorial optimization, to the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient exact algorithm for this problem has not been proposed to date, so heuristic methods are imperative. Numerical solutions are obtained for five sets of weekly price data covering March 1992 to September 1997: the Hang Seng 31 in Hong Kong, the DAX 100 in Germany, the FTSE 100 in the UK, the S&P 100 in the USA, and the Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
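
    A minimal sketch of the objective such a heuristic scores, assuming the usual formulation with a risk-aversion parameter λ and a repair step for the cardinality constraint (at most K assets); this is an illustration, not the paper's implementation:

        import numpy as np

        def objective(w, mu, cov, lam):
            """Markowitz trade-off lam * risk - (1 - lam) * return, minimized."""
            return lam * (w @ cov @ w) - (1 - lam) * (mu @ w)

        def repair_cardinality(w, K):
            """Keep the K largest nonnegative weights, zero the rest, and
            renormalize to sum to one (assumes at least one positive weight)."""
            w = np.clip(w, 0.0, None)
            keep = np.argsort(w)[-K:]
            out = np.zeros_like(w)
            out[keep] = w[keep]
            return out / out.sum()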

  16. Variance of indoor radon concentration: Major influencing factors.

    PubMed

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes, and measurement durations, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the area of the territory, the sample size, the characteristics of the measurement technique, the geogenic radon potential, building construction characteristics, and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting these controlling factors to certain levels. Application of the developed approach to characterizing the radon exposure of the world population is discussed.
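
    Since indoor radon concentrations are commonly treated as lognormal, the GSD used here to quantify dispersion reduces to a one-liner; a minimal sketch:

        import numpy as np

        def gsd(radon_bq_m3):
            """Geometric standard deviation of indoor radon concentrations.

            For lognormal data, GSD = exp(std(ln x)); it is a dimensionless
            factor, so restricting a survey to more homogeneous conditions
            should pull it toward 1.
            """
            return np.exp(np.log(np.asarray(radon_bq_m3)).std(ddof=1))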

  17. A method for the microlensed flux variance of QSOs

    NASA Astrophysics Data System (ADS)

    Goodman, Jeremy; Sun, Ai-Lei

    2014-06-01

    A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
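
    One of the method's required inputs is the absolute square of the Fourier transform of the source surface brightness. For a circular Gaussian source this is known in closed form, which gives a quick correctness check for numerically tabulated profiles; a sketch under that assumption (grid size and units are arbitrary):

        import numpy as np

        sigma = 1.0                        # source size (source-plane units)
        N, L = 512, 40.0                   # grid points, box side length
        x = (np.arange(N) - N // 2) * (L / N)
        X, Y = np.meshgrid(x, x)
        I = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

        # discrete approximation to the continuous Fourier transform of I
        Ik = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(I))) * (L / N)**2
        k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=L / N))

        num = np.abs(Ik[N // 2, :])**2     # |FT|^2 along the k_x axis
        ana = np.exp(-sigma**2 * k**2)     # analytic result for a Gaussian
        print(np.max(np.abs(num - ana)))   # should be at rounding/grid level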

  18. The dynamic Allan variance II: a fast computational algorithm.

    PubMed

    Galleani, Lorenzo

    2010-01-01

    The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiation, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
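
    For reference, a brute-force sketch of the quantity involved: the overlapping Allan variance computed from phase (time-error) data, re-evaluated over a sliding window to form the DAVAR. The point of the paper is a fast recursive algorithm that avoids exactly this window-by-window recomputation:

        import numpy as np

        def allan_variance(x, m, tau0):
            """Overlapping Allan variance from phase data x at tau = m * tau0."""
            tau = m * tau0
            d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences
            return (d**2).sum() / (2 * (x.size - 2 * m) * tau**2)

        def davar_naive(x, m, tau0, win, step):
            """Dynamic Allan variance the slow way: one AVAR per window.

            Cost grows as (number of windows) * (window length), which is
            what the fast algorithm avoids.
            """
            return [allan_variance(x[t:t + win], m, tau0)
                    for t in range(0, x.size - win + 1, step)]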

  19. 78 FR 2986 - Northern Indiana Public Service Company; Notice of Application for Temporary Variance of License...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-15

    ... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...

  20. Analysis of Variance of Migmatite Composition II: Comparison of Two Areas.

    PubMed

    Ward, R F; Werner, S L

    1964-03-01

    To obtain a comparison with previous results, an analysis of variance was made on measurements of the proportions of granite and country rock in a second Colorado migmatite. The distributional parameters (mean and variance) of the two regions are similar, but the distributions of variance among the three levels of the nested design differ radically.
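
    For a balanced design, the variance components behind such an analysis of variance follow from the ANOVA mean squares. A minimal two-level sketch (the study used three nested levels; the data here are simulated and all names illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        k, n = 12, 8                      # groups (e.g. outcrops), replicates each
        sd_between, sd_within = 2.0, 1.0  # true standard deviations
        y = rng.normal(0, sd_between, (k, 1)) + rng.normal(0, sd_within, (k, n))

        gmeans = y.mean(axis=1)
        ms_between = n * gmeans.var(ddof=1)           # n * variance of group means
        ms_within = ((y - gmeans[:, None])**2).sum() / (k * (n - 1))

        var_within = ms_within                        # estimates sd_within**2
        var_between = (ms_between - ms_within) / n    # estimates sd_between**2
        print(var_between, var_within)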