Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
Peplow, Douglas E.; Blakeman, Edward D.; Wagner, John C.
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
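The weight-window idea that CADIS automates can be illustrated with a deliberately crude sketch. The toy below is our own construction (a purely absorbing 1-D slab with fixed two-for-one splitting per layer) and not the CADIS algorithm itself, which derives its biasing parameters from a deterministic adjoint solution; it only shows why keeping the deep-penetration particle population alive reduces the variance of a transmission tally.

```python
import random

def transmission(n_histories, layers=10, p_abs=0.5, splitting=False, seed=1):
    """Toy 1-D deep-penetration estimate of the transmission probability.

    Analog mode kills absorbed particles outright. With splitting=True,
    each particle that survives a layer is split two-for-one at half
    weight (a crude stand-in for adjoint-derived weight windows), so the
    population deep in the slab does not die out.
    """
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_histories):
        bank = [(0, 1.0)]                      # (layer index, weight)
        while bank:
            layer, w = bank.pop()
            while layer < layers:
                if rng.random() < p_abs:       # absorbed: zero score
                    w = 0.0
                    break
                layer += 1
                if splitting:
                    w *= 0.5                   # survivor becomes two copies,
                    bank.append((layer, w))    # each at half weight
            score += w                         # weight reaching the far face
    return score / n_histories
```

Both modes estimate the transmission probability (1 - p_abs)^layers ≈ 9.8·10^-4 without bias; in this toy the splitting run typically reaches a comparable relative error with roughly a tenth of the source histories.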
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
A computer program incorporates a technique that reduces the variance of the forward Monte Carlo method, for a given amount of computer time, in determining the radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP
Edward W. Larsen
2008-06-01
The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (ii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due
Variance Reduction for a Discrete Velocity Gas
NASA Astrophysics Data System (ADS)
Morris, A. B.; Varghese, P. L.; Goldstein, D. B.
2011-05-01
We extend a variance reduction technique developed by Baker and Hadjiconstantinou [1] to a discrete velocity gas. In our previous work, the collision integral was evaluated by importance sampling of collision partners [2]. Significant computational effort may be wasted by evaluating the collision integral in regions where the flow is in equilibrium. In the current approach, substantial computational savings are obtained by solving only for the deviations from equilibrium. In the near-continuum regime, the deviations from equilibrium are small, and low-noise evaluation of the collision integral can be achieved with very coarse statistical sampling. Spatially homogeneous relaxation of the Bobylev-Krook-Wu distribution [3,4] was used as a test case to verify that the method predicts the correct evolution of a highly non-equilibrium distribution to equilibrium. When variance reduction is not used, the noise causes the entropy to undershoot, but the method with variance reduction matches the analytic curve for the same number of collisions. We then extend the work to travelling shock waves and compare the accuracy and computational savings of the variance reduction method to DSMC over Mach numbers ranging from 1.2 to 10.
Variance Reduction Using Nonreversible Langevin Samplers
NASA Astrophysics Data System (ADS)
Duncan, A. B.; Lelièvre, T.; Pavliotis, G. A.
2016-05-01
A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
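The nonreversible trick is easy to reproduce in a toy setting. The sketch below is our own illustration (a standard 2-D Gaussian target with an Euler-Maruyama discretization), not the construction analyzed in the paper: the skew drift gamma*J*x is divergence-free and tangent to the level sets of U(x) = |x|^2/2, so the Gibbs measure stays invariant for every gamma while detailed balance is broken for gamma != 0.

```python
import numpy as np

def langevin_mean_square(gamma, n_steps=100_000, dt=0.01, seed=0):
    """Time-averaged estimate of E[x1^2] (exact value 1) for a standard
    2-D Gaussian target, using Euler-Maruyama on
        dX = (-X + gamma * J X) dt + sqrt(2) dW,
    where J is skew-symmetric. gamma = 0 gives the reversible
    overdamped Langevin dynamics; gamma != 0 adds a nonreversible
    rotation that preserves the same stationary measure.
    """
    rng = np.random.default_rng(seed)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])   # skew-symmetric stirring matrix
    x = np.zeros(2)
    acc = 0.0
    for _ in range(n_steps):
        drift = -x + gamma * (J @ x)
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        acc += x[0] ** 2
    return acc / n_steps
```

Increasing gamma stirs the process around the level sets of the potential, which tends to decorrelate the trajectory faster and lower the variance of the time average, up to the discretization bias of order dt.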
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Automated variance reduction for Monte Carlo shielding analyses with MCNP
NASA Astrophysics Data System (ADS)
Radulescu, Georgeta
Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase space of interest and thereby lower the variance of the statistical estimates. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional discrete ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional discrete ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
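Of the three techniques listed, quasi-random sampling is the easiest to demonstrate in isolation. The sketch below is a generic illustration (a smooth 1-D integrand and a base-2 van der Corput sequence), not the authors' photon-transport implementation:

```python
import random

def van_der_corput(i, base=2):
    """Radical-inverse low-discrepancy point in [0, 1)."""
    x, denom = 0.0, 1.0
    while i:
        i, rem = divmod(i, base)
        denom *= base
        x += rem / denom
    return x

def integrate(n, quasi=False, seed=13):
    """Estimate the integral of f(u) = u^2 on [0,1] (exact value 1/3)
    with either pseudorandom or van der Corput quasi-random abscissae."""
    if quasi:
        pts = (van_der_corput(i + 1) for i in range(n))
    else:
        rng = random.Random(seed)
        pts = (rng.random() for _ in range(n))
    return sum(u * u for u in pts) / n
```

Because the low-discrepancy points fill [0, 1) far more evenly than pseudorandom draws, the quasi-random error decays roughly like O(log n / n) for smooth integrands, versus the O(n^-1/2) of plain Monte Carlo.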
Automated Variance Reduction Applied to Nuclear Well-Logging Problems
Wagner, John C.; Peplow, Douglas E.; Evans, Thomas M.
2009-01-01
The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
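The essence of the scheme, unequal stratified allocation between the cell interior and its fringe, fits in a few lines. The example below is our own one-cell caricature (uniform emission sites, a purely absorbing medium, a fixed 90/10 particle split), not the paper's multi-dimensional treatment:

```python
import math
import random

def escape_fraction(n, tau=30.0, fringe=False, seed=5):
    """Estimate the mean escape probability of thermal emission born
    uniformly in an optically thick cell of optical depth tau:
    E[exp(-tau*(1-x))] for birth site x ~ U(0,1); exact value
    (1 - exp(-tau))/tau. With fringe=True, 90% of the particles are
    allocated to the decile nearest the surface and 10% to the interior,
    each stratum carrying the compensating weight (width / particle share).
    """
    rng = random.Random(seed)
    acc = 0.0
    if not fringe:
        for _ in range(n):
            x = rng.random()
            acc += math.exp(-tau * (1.0 - x))
    else:
        n_fringe = (9 * n) // 10
        n_core = n - n_fringe
        for _ in range(n_fringe):                      # stratum [0.9, 1.0]
            x = 0.9 + 0.1 * rng.random()
            acc += (0.1 * n / n_fringe) * math.exp(-tau * (1.0 - x))
        for _ in range(n_core):                        # stratum [0.0, 0.9]
            x = 0.9 * rng.random()
            acc += (0.9 * n / n_core) * math.exp(-tau * (1.0 - x))
    return acc / n
```

At optical depth 30 the interior contributes almost nothing to the escaping energy, so concentrating 90% of the particles in the surface decile buys roughly an order of magnitude in variance at equal cost; in practice the fringe width and particle share would be tuned together, as the abstract notes.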
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code PENELOPE for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10^5 s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
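Interaction forcing, one of the two techniques above, is simple enough to demonstrate on its own. The sketch below is a minimal caricature with assumed geometry (exponential free paths in a slab of small optical thickness standing in for a small or distant target), not the PENELOPE implementation:

```python
import math
import random

def first_collision_moment(n, tau=0.1, forced=False, seed=7):
    """Estimate E[s * 1{s < tau}] for an exponential free path s ~ Exp(1),
    i.e. the mean depth of first collision inside a slab of optical
    thickness tau. Interaction forcing samples s from the exponential
    truncated to [0, tau) and carries the interaction probability
    p_int = 1 - exp(-tau) as a weight, so every history scores.
    """
    rng = random.Random(seed)
    p_int = 1.0 - math.exp(-tau)
    acc = 0.0
    for _ in range(n):
        if forced:
            s = -math.log(1.0 - rng.random() * p_int)  # truncated to [0, tau)
            acc += p_int * s                           # weight correction
        else:
            s = -math.log(1.0 - rng.random())          # analog free path
            if s < tau:
                acc += s
    return acc / n
```

Every forced history contributes, while analog histories score only with probability 1 - e^(-tau) ≈ 0.095 here, which is why the forced run needs roughly an order of magnitude fewer histories for the same uncertainty.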
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
MC Estimator Variance Reduction with Antithetic and Common Random Fields
NASA Astrophysics Data System (ADS)
Guthke, P.; Bardossy, A.
2011-12-01
Monte Carlo methods are widely used to estimate the outcome of complex physical models. For physical models with spatial parameter uncertainty, it is common to apply spatial random functions to the uncertain variables, which can then be used to interpolate between known values or to simulate a number of equally likely realizations. The price that has to be paid for such a stochastic approach is that many simulations of the physical model are required instead of just one run with a single 'best' input parameter set. The number of simulations is often limited by computational constraints, so a modeller has to compromise between the benefit of increased accuracy and the cost of massively increased computational time. Our objective is to reduce the estimator variance of dependent variables in Monte Carlo frameworks. To this end, we adapt two variance reduction techniques (antithetic variates and common random numbers) to a sequential random field simulation scheme that uses copulas as spatial dependence functions. The proposed methodology leads to pairs of spatial random fields with special structural properties that are advantageous in MC frameworks. Antithetic random fields (ARF) exhibit a reversed structure on the large scale, while the dependence on the local scale is preserved. Common random fields (CRF) show the same large-scale structures, but different spatial dependence on the local scale. The performance of the proposed methods is examined with two typical applications of stochastic hydrogeology. It is shown that ARF massively reduce the number of simulation runs required for convergence in Monte Carlo frameworks while keeping the same accuracy in terms of estimator variance. Furthermore, in multi-model frameworks, such as sensitivity analysis of the spatial structure, where more than one spatial dependence model is used, the influence of the different dependence structures becomes obvious.
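Before any spatial structure enters, the antithetic principle can be seen on a scalar toy problem. This is our own illustration of plain antithetic variates, not the copula-based field construction of the abstract:

```python
import math
import random

def estimate_exp_mean(n_pairs, antithetic=False, seed=42):
    """Estimate E[exp(U)], U ~ Uniform(0,1); the exact value is e - 1.
    Uses 2*n_pairs draws, either independently or as antithetic pairs
    (u, 1-u). Because exp is monotone, the members of a pair are
    negatively correlated, so the pair average has reduced variance --
    the same pairing idea the antithetic random fields above apply to
    entire spatial realizations.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        v = 1.0 - u if antithetic else rng.random()  # mirrored vs fresh draw
        total += 0.5 * (math.exp(u) + math.exp(v))
    return total / n_pairs
```

For this monotone integrand the antithetic estimator's variance works out to be roughly 30 times smaller than that of independent sampling at identical cost.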
A comparison of variance reduction techniques for radar simulation
NASA Astrophysics Data System (ADS)
Divito, A.; Galati, G.; Iovino, D.
Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and consequently for reducing the estimation errors. This feature is paid for by a lack of generality of the simulation procedure. The EVT is only valid when a probability tail is to be estimated (false alarm problems) and requires, as the only a priori information, that the considered variate belong to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
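A minimal false-alarm-style example of the importance-sampling side of this comparison (our own sketch, assuming a Gaussian test statistic and a mean-shifted proposal):

```python
import math
import random

def tail_prob(n, a=4.0, shifted=False, seed=3):
    """Estimate the false-alarm-type tail probability P(X > a), X ~ N(0,1).
    Plain Monte Carlo almost never lands in the tail; importance sampling
    draws from the shifted proposal N(a, 1) instead and reweights each
    tail sample by the likelihood ratio exp(a^2/2 - a*x), concentrating
    the samples where they matter.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        if shifted:
            x = rng.gauss(a, 1.0)
            if x > a:
                acc += math.exp(0.5 * a * a - a * x)  # N(0,1)/N(a,1) density ratio
        else:
            if rng.gauss(0.0, 1.0) > a:
                acc += 1.0
    return acc / n
```

At n = 5·10^4 the plain estimator expects only one or two threshold crossings, whereas the shifted proposal places about half of its samples beyond the threshold and reweights them exactly, bringing the relative error down to around one percent.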
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the weight window where a large weight deviation is encountered. The method effectively 'de-optimises' the weight window, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
Variance reduction in Monte Carlo analysis of rarefied gas diffusion
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the M. C. result has a much smaller error.
Irreversible Langevin samplers and variance reduction: a large deviations approach
NASA Astrophysics Data System (ADS)
Rey-Bellet, Luc; Spiliopoulos, Konstantinos
2015-07-01
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of the ergodic average for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
Verification of the history-score moment equations for weight-window variance reduction
Solomon, Clell J.; Sood, Avneet; Booth, Thomas E.; Shultis, J. Kenneth
2010-12-06
The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from Monte Carlo for one-dimensional problems and between 1 and 2% for two-dimensional problems.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
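The distance-to-collision biasing that such transforms build on can be shown in isolation. This is a minimal sketch of a stretched flight-distance estimator with a likelihood-ratio weight on a toy absorbing-slab problem; sigma, L, and the stretch factor are illustrative choices, not the paper's method:

```python
import numpy as np

# Toy problem: probability that a particle crosses a purely absorbing slab of
# optical thickness sigma * L without colliding; the exact answer is exp(-sigma*L).
sigma, L, n = 1.0, 5.0, 100_000
rng = np.random.default_rng(0)

# Analog: sample distance-to-collision from Exp(sigma), score 1 on transmission.
s = rng.exponential(1.0 / sigma, n)
analog = (s > L).mean()

# Biased: stretch the flight distribution to Exp(sigma_b) with sigma_b < sigma,
# so far more histories transmit, and correct with the likelihood-ratio weight
# of the censored event {s > L}: exp(-sigma*L) / exp(-sigma_b*L).
sigma_b = 0.3 * sigma
s_b = rng.exponential(1.0 / sigma_b, n)
weight = np.exp(-(sigma - sigma_b) * L)
biased = weight * (s_b > L).mean()

print(analog, biased)  # both estimate exp(-5), roughly 0.0067
```

The biased estimator scores the same expectation with far more contributing histories, so its relative error is several times smaller for the same n.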
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
ADVANTG 3.0.1: AutomateD VAriaNce reducTion Generator
2015-08-17
Version 00 ADVANTG is an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 V1.60 (CCC-810, not included in this distribution) based on approximate 3-D multigroup discrete ordinates adjoint transport solutions generated by Denovo (included in this distribution). The variance reduction parameters generated by ADVANTG consist of space and energy-dependent weight-window bounds and biased source distributions, which are output in formats that can be directly used with unmodified versions of MCNP5. ADVANTG has been applied to neutron, photon, and coupled neutron-photon simulations of real-world radiation detection and shielding scenarios. ADVANTG is compatible with all MCNP5 geometry features and can be used to accelerate cell tallies (F4, F6, F8), surface tallies (F1 and F2), point-detector tallies (F5), and Cartesian mesh tallies (FMESH).
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C; Murphy, Brian D; Mueller, Don
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
NASA Astrophysics Data System (ADS)
Lai, Yongzeng; Zeng, Yan; Xi, Xiaojing
2011-11-01
In this paper, we discuss control variate methods for Asian option pricing under exponential jump diffusion model for the underlying asset prices. Numerical results show that the new control variate XNCV is much more efficient than the classical control variate XCCV when used in pricing Asian options. For example, the variance reduction ratios by XCCV are no more than 120 whereas those by XNCV vary from 15797 to 49171 on average over sample sizes 1024, 2048, 4096, 8192, 16384 and 32768.
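The control-variate mechanism the paper relies on can be sketched on a toy expectation rather than the jump-diffusion Asian option; f, g, and the target below are illustrative stand-ins for the option payoff and the XCCV/XNCV controls:

```python
import numpy as np

# Control-variate estimation of E[f(X)] with f(x) = exp(x), X ~ N(0,1).
# The true value is exp(1/2). The control g(x) = x has known mean 0 and is
# strongly correlated with f, which is what drives the variance reduction.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
f, g = np.exp(x), x

beta = np.cov(f, g)[0, 1] / g.var()   # (near-)optimal coefficient
f_cv = f - beta * (g - 0.0)           # 0.0 is the known E[g]

plain, with_cv = f.mean(), f_cv.mean()
ratio = f.var() / f_cv.var()          # variance reduction ratio
print(plain, with_cv, ratio)
```

The quoted ratios (about 120 for XCCV versus tens of thousands for XNCV) are exactly this `ratio` quantity: a better-correlated control shrinks the residual variance, not the estimator's expectation.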
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-09-15
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
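Ingredient (3), the multilevel variance reduction, can be sketched with a generic two-level estimator in which a cheap low-fidelity model absorbs most of the sampling. The trapezoid "solver" and the random parameter below are illustrative stand-ins for the reduced basis and HDG approximations:

```python
import numpy as np

def trap_integral(omega, n):
    """Trapezoid approximation of the integral of sin(omega*x) over [0,1]
    on n intervals; n plays the role of the fidelity level."""
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.sin(omega * x)
    h = 1.0 / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

rng = np.random.default_rng(1)

# Level 0: many cheap low-fidelity samples of the random output.
w0 = rng.uniform(1.0, 3.0, 20_000)
coarse = np.mean([trap_integral(w, 8) for w in w0])

# Correction level: few samples of the fine-minus-coarse difference. Using the
# SAME random parameter in both evaluations is the statistical correlation
# that makes the difference small-variance.
w1 = rng.uniform(1.0, 3.0, 500)
correction = np.mean([trap_integral(w, 128) - trap_integral(w, 8) for w in w1])

estimate = coarse + correction  # two-level estimate of the expected output
print(estimate)                 # exact value is about 0.658
```

The expensive model is evaluated only 500 times, yet the estimator is unbiased with respect to the fine level; this is the burden-shifting the abstract describes.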
Variance reduction for Fokker-Planck based particle Monte Carlo schemes
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-01
Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
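The correlated-process idea can be illustrated outside the Fokker-Planck setting with a toy SDE pair; the drift, coefficients, and observable below are invented for illustration and are not the paper's model:

```python
import numpy as np

# Estimate E[X_T] for dX = -sin(X) dt + 0.5 dW (no closed form) by pairing it
# with an auxiliary OU process dY = -Y dt + 0.5 dW driven by the SAME Brownian
# increments. Y has the known solution E[Y_T] = y0 * exp(-T), so
# E[X_T] = E[X_T - Y_T] + E[Y_T], and X_T - Y_T has small variance because
# the two processes share their noise.
rng = np.random.default_rng(2)
n_paths, n_steps, T, sig = 20_000, 200, 2.0, 0.5
dt = T / n_steps

x = np.full(n_paths, 1.0)
y = np.full(n_paths, 1.0)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)  # shared increments
    x += -np.sin(x) * dt + sig * dw
    y += -y * dt + sig * dw

direct = x.mean()
paired = (x - y).mean() + 1.0 * np.exp(-T)  # add back the known E[Y_T]
print(direct, paired)
print(x.var(), (x - y).var())  # the difference has much smaller variance
```

Both estimators are unbiased for the same quantity; the paired one inherits its small variance from the strong path-wise correlation, which is the mechanism behind the low-Mach-number gains reported above.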
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2007-09-01
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.
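A minimal sketch of the two techniques being controlled, splitting and Russian roulette, on a toy random-walk deep-penetration problem (the walk parameters are illustrative; no accelerator physics is modeled):

```python
import random

P_RIGHT, GOAL = 0.4, 8  # walk biased against the goal: a rare-event test case

def history(rng, use_vr):
    """Track one source particle from cell 1 until absorption at 0 or GOAL.
    With use_vr, a particle is split 2-for-1 (half weight each) on every step
    toward the goal and rouletted (killed with prob 1/2, survivor weight
    doubled) on every step away, so the weighted tally stays unbiased while
    the rare deep cells are well populated."""
    stack, score = [(1, 1.0)], 0.0
    while stack:
        x, w = stack.pop()
        while True:
            step = 1 if rng.random() < P_RIGHT else -1
            x += step
            if x == 0:
                break                      # absorbed at the near boundary
            if x == GOAL:
                score += w                 # tally the particle's weight
                break
            if use_vr and step == 1:       # splitting: 2 copies, half weight
                w *= 0.5
                stack.append((x, w))
            elif use_vr and step == -1:    # Russian roulette
                if rng.random() < 0.5:
                    break
                w *= 2.0
    return score

rng = random.Random(0)
n = 20_000
analog = sum(history(rng, False) for _ in range(n)) / n
vr = sum(history(rng, True) for _ in range(n)) / n
print(analog, vr)  # gambler's-ruin answer: 0.5/(1.5**8 - 1), about 0.0203
```

Choosing where and how aggressively to split or roulette is exactly the tuning problem the ant colony method automates.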
MCNPX--PoliMi Variance Reduction Techniques for Simulating Neutron Scintillation Detector Response
NASA Astrophysics Data System (ADS)
Prasad, Shikha
Scintillation detectors have emerged as a viable He-3 replacement technology in the field of nuclear nonproliferation and safeguards. The scintillation light produced in the detectors depends on the energy deposited and the nucleus with which the interaction occurs. For neutrons interacting with hydrogen in organic liquid scintillation detectors, the energy-to-light conversion process is nonlinear. MCNPX-PoliMi is a Monte Carlo code that has been used for simulating this detailed scintillation physics; however, until now, simulations have only been done in analog mode. Analog Monte Carlo simulations can take a long time to run, especially in the presence of shielding and large source-detector distances, as in typical nonproliferation problems. In this thesis, two nonanalog approaches to speed up MCNPX-PoliMi simulations of neutron scintillation detector response have been studied. In the first approach, a response matrix method (RMM) is used to efficiently calculate neutron pulse height distributions (PHDs). This method combines the neutron current incident on the detector face with an MCNPX-PoliMi-calculated response matrix to generate PHDs. The PHD calculations and their associated uncertainty are compared for a polyethylene-shielded and lead-shielded Cf-252 source for three different techniques: fully analog MCNPX-PoliMi, the RMM, and the RMM with source biasing. The RMM with source biasing reduces computation time, increasing the figure of merit on average by a factor of 600 for polyethylene and 300 for lead shielding (compared to the fully analog calculation). The simulated neutron PHDs show good agreement with the laboratory measurements, thereby validating the RMM. In the second approach, MCNPX-PoliMi simulations are performed with the aid of variance reduction techniques. This is done by separating the analog and nonanalog components of the simulations. Inside the detector region, where scintillation light is produced, no variance
Application of fuzzy sets to estimate cost savings due to variance reduction
NASA Astrophysics Data System (ADS)
Munoz, Jairo; Ostwald, Phillip F.
1993-12-01
One common assumption of models to evaluate the cost of variation is that the quality characteristic can be approximated by a standard normal distribution. Such an assumption is invalid for three important cases: (a) when the random variable is always positive, (b) when manual intervention distorts random variation, and (c) when the variable of interest is evaluated by linguistic terms. This paper applies the Weibull distribution to address nonnormal situations and fuzzy logic theory to study the case of quality evaluated via lexical terms. The approach concentrates on the cost incurred by inspection to formulate a probabilistic-possibilistic model that determines cost savings due to variance reduction. The model is tested with actual data from a manual TIG welding process.
Comparison of hybrid methods for global variance reduction in shielding calculations
Peplow, D. E.
2013-07-01
For Monte Carlo shielding problems that calculate a mesh tally over the entire problem, the statistical uncertainties computed for each voxel can vary widely. This can lead to unacceptably long run times in order to reduce the uncertainties in all areas of the problem to a reasonably low level. Hybrid methods - using estimates from deterministic calculations to create importance maps for variance reduction in Monte Carlo calculations - have been successfully used to optimize the calculation of specific tallies. For the global problem, several methods have been proposed to create importance maps that distribute Monte Carlo particles in such a way as to achieve a more uniform distribution of relative uncertainty across the problem. The goal is to compute a mesh tally with nearly the same relative uncertainties in the low flux/dose areas as in the high flux/dose areas. Methods based on only forward deterministic estimates and methods using both forward and adjoint deterministic methods have been implemented in the SCALE/MAVRIC package and have been compared against each other by computing global mesh tallies on several representative shielding problems. Methods using both forward and adjoint estimates provide better performance for computing more uniform relative uncertainties across a global mesh tally. (authors)
Somasundaram, E.; Palmer, T. S.
2013-07-01
In this paper, we present the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
Hybrid mesh generation using advancing reduction technique
Technology Transfer Automated Retrieval System (TEKTRAN)
This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...
Advanced CO2 Removal and Reduction System
NASA Technical Reports Server (NTRS)
Alptekin, Gokhan; Dubovik, Margarita; Copeland, Robert J.
2011-01-01
An advanced system for removing CO2 and H2O from cabin air, reducing the CO2, and returning the resulting O2 to the air is less massive than a prior system that includes two assemblies: one for removal and one for reduction. Also, in this system, unlike in the prior system, there is no need to compress and temporarily store CO2. In the present system, removal and reduction take place within a single assembly, wherein removal is effected by use of an alkali sorbent and reduction is effected using a supply of H2 and a Ru catalyst, by means of the Sabatier reaction: CO2 + 4H2 → CH4 + 2H2O. The assembly contains two fixed-bed reactors operating in alternation: at first, air is blown through the first bed, which absorbs CO2 and H2O. Once the first bed is saturated with CO2 and H2O, the flow of air is diverted through the second bed and the first bed is regenerated by supplying it with H2 for the Sabatier reaction. Initially, the H2 is heated to provide heat for the regeneration reaction, which is endothermic. In the later stages of regeneration, the Sabatier reaction, which is exothermic, supplies the heat for regeneration.
Milias-Argeitis, Andreas; Khammash, Mustafa; Lygeros, John
2014-07-14
We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
Sampson, Andrew; Le Yi; Williamson, Jeffrey F.
2012-02-15
heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 x 1 x 1 mm{sup 3}) and breast (0.67 x 0.67 x 0.8 mm{sup 3}) CTVs, respectively. Conclusions: CMC supports an additional average 38-60 fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6), generally in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2010-01-01
The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in reducing this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and permits increasing the efficiency of the simulation by a factor larger than 50.
Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits
NASA Technical Reports Server (NTRS)
Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.
2005-01-01
This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft which included a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
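The CADIS/FW-CADIS bookkeeping described above can be sketched on a toy one-dimensional mesh. The Green's function "solver" and all parameters below are illustrative stand-ins for the deterministic (e.g., Denovo) calculation; only the biasing formulas follow the method:

```python
import numpy as np

# Toy 1-D, one-group sketch of the FW-CADIS quantities on a mesh. A simple
# exponential-attenuation kernel G(x,y) = exp(-sigma_t |x-y|) dx stands in
# for the discrete ordinates solver.
sigma_t = 1.0
x = np.linspace(0.5, 19.5, 20)            # cell centers
dx = x[1] - x[0]
G = np.exp(-sigma_t * np.abs(x[:, None] - x[None, :])) * dx

q = np.zeros_like(x)
q[:2] = [1.0, 0.5]                        # localized forward source
phi_fwd = G @ q                           # forward flux estimate

# FW-CADIS for uniform relative uncertainty in a global flux tally:
# the adjoint source is weighted by the reciprocal of the forward flux.
q_adj = 1.0 / phi_fwd
phi_adj = G.T @ q_adj                     # adjoint flux = importance map

R = q @ phi_adj                           # total response <q, phi_adj>
biased_source = q * phi_adj / R           # consistent source biasing
ww_center = R / phi_adj                   # weight-window target weights

# Consistency check: a particle born from the biased source with weight
# R / phi_adj starts exactly at its weight-window center.
print(biased_source.sum())                # 1.0 by construction
print(ww_center * phi_adj / R)            # an array of ones
```

The "consistent" in CADIS is visible in the last line: source biasing and weight windows are derived from the same adjoint flux, so no weight games are needed at birth.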
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
Oxidation-Reduction Resistance of Advanced Copper Alloys
NASA Technical Reports Server (NTRS)
Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.
2003-01-01
Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO
ERIC Educational Resources Information Center
Gee, Jerry Brooksher
A common belief among teacher educators is that different academic backgrounds may influence student entry level and rates of matriculation through the curriculum. This report describes a study using a "pretest/posttest" method to evaluate student academic progression, and to determine variance in scores between two groups of graduate students…
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
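The sensitivity arithmetic behind such averaging can be illustrated with the ideal radiometer equation; all numbers below are assumed for illustration and are not the parameters of the experiment described:

```python
import math

# Ideal radiometer sensitivity: dT = T_sys / sqrt(B * tau) for one look.
# A Dicke-type radiometer differences antenna and reference looks, so both
# noise terms enter the total Delta-T. Illustrative (assumed) numbers:
T_sys = 500.0      # system noise temperature, K
B = 50e6           # predetection bandwidth, Hz
tau = 0.1          # total cycle time, s

def delta_t(duty, ref_avg_cycles):
    """Delta-T for antenna duty cycle `duty`, with the reference noise
    averaged over `ref_avg_cycles` cycles of stored reference data."""
    t_ant = duty * tau
    t_ref = (1.0 - duty) * tau * ref_avg_cycles
    return T_sys * math.sqrt(1.0 / (B * t_ant) + 1.0 / (B * t_ref))

classic = delta_t(0.5, 1)      # balanced Dicke cycle, no reference averaging
improved = delta_t(0.833, 50)  # >50% antenna duty cycle + reference averaging
print(classic, improved, 1.0 - improved / classic)  # fractional reduction
```

With these assumed numbers the combination of a high antenna duty cycle and long reference averaging cuts Delta-T by roughly 40%, the same qualitative effect (though not the same figures) as the 34.8% and 68.5% reductions reported above.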
Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics
NASA Technical Reports Server (NTRS)
Bushnell, Dennis M.
2000-01-01
This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.
Recent Advances in Electrocatalysts for Oxygen Reduction Reaction.
Shao, Minhua; Chang, Qiaowan; Dodelet, Jean-Pol; Chenitz, Regis
2016-03-23
The recent advances in electrocatalysis for the oxygen reduction reaction (ORR) in proton exchange membrane fuel cells (PEMFCs) are thoroughly reviewed. This comprehensive Review focuses on low- and non-platinum electrocatalysts, including advanced platinum alloys, core-shell structures, palladium-based catalysts, metal oxides and chalcogenides, carbon-based non-noble metal catalysts, and metal-free catalysts. The recent development of ORR electrocatalysts with novel structures and compositions is highlighted. Current understanding of the correlation between activity and shape, size, composition, and synthesis method is summarized. For the carbon-based materials, their performance and stability in fuel cells are documented and compared with those of platinum. Research directions as well as perspectives on the further development of more active and less expensive electrocatalysts are provided. PMID:26886420
Recent advances in the kinetics of oxygen reduction
Adzic, R.
1996-07-01
Oxygen reduction is considered an important electrocatalytic reaction; the most notable need remains improvement of the catalytic activity of existing metal electrocatalysts and development of new ones. A review is given of new advances in the understanding of reaction kinetics and improvements of the electrocatalytic properties of some surfaces, with a focus on recent studies of the relationship of a surface's properties to its activity and reaction kinetics. The urgent need is to improve the catalytic activity of Pt and to synthesize new, possibly non-noble metal catalysts. New experimental techniques for obtaining a new level of information include various in situ spectroscopies and scanning probes, some involving synchrotron radiation. 138 refs, 18 figs, 2 tabs.
Lung volume reduction for advanced emphysema: surgical and bronchoscopic approaches.
Tidwell, Sherry L; Westfall, Elizabeth; Dransfield, Mark T
2012-01-01
Chronic obstructive pulmonary disease is the third leading cause of death in the United States, affecting more than 24 million people. Inhaled bronchodilators are the mainstay of therapy; they improve symptoms and quality of life and reduce exacerbations. These, together with smoking cessation and long-term oxygen therapy for hypoxemic patients, are the only medical treatments definitively demonstrated to reduce mortality. Surgical approaches include lung transplantation and lung volume reduction; the latter has been shown to improve exercise tolerance, quality of life, and survival in highly selected patients with advanced emphysema. Lung volume reduction surgery yields clinical benefits, but the procedure is associated with a short-term risk of mortality and a more significant risk of cardiac and pulmonary perioperative complications. Interest has been growing in the use of noninvasive, bronchoscopic methods to address the pathological hyperinflation that drives the dyspnea and exercise intolerance characteristic of emphysema. In this review, the mechanism by which lung volume reduction improves pulmonary function is outlined, along with the risks and benefits of the traditional surgical approach. In addition, the emerging bronchoscopic techniques for lung volume reduction are introduced and recent clinical trials examining their efficacy are summarized. PMID:22189668
Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping
2016-01-01
The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.
Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector
Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.
2014-09-01
Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.
Virus Reduction during Advanced Bardenpho and Conventional Wastewater Treatment Processes.
Schmitz, Bradley W; Kitajima, Masaaki; Campillo, Maria E; Gerba, Charles P; Pepper, Ian L
2016-09-01
The present study investigated wastewater treatment for the removal of 11 different virus types (pepper mild mottle virus; Aichi virus; genogroup I, II, and IV noroviruses; enterovirus; sapovirus; group-A rotavirus; adenovirus; and JC and BK polyomaviruses) by two wastewater treatment facilities utilizing advanced Bardenpho technology and compared the results with conventional treatment processes. To our knowledge, this is the first study comparing full-scale treatment processes that all received sewage influent from the same region. The incidence of viruses in wastewater was assessed with respect to absolute abundance, occurrence, and reduction in monthly samples collected throughout a 12 month period in southern Arizona. Samples were concentrated via an electronegative filter method and quantified using TaqMan-based quantitative polymerase chain reaction (qPCR). Results suggest that Plant D, utilizing an advanced Bardenpho process as secondary treatment, reduced pathogenic viruses more effectively than facilities using conventional processes. However, the absence of cell-culture assays did not allow an accurate assessment of infective viruses. On the basis of these data, the Aichi virus is suggested as a conservative viral marker for adequate wastewater treatment, as it most often showed the best correlation coefficients to viral pathogens, was always detected at higher concentrations, and may overestimate the potential virus risk. PMID:27447291
Low cost biological lung volume reduction therapy for advanced emphysema
Bakeer, Mostafa; Abdelgawad, Taha Taha; El-Metwaly, Raed; El-Morsi, Ahmed; El-Badrawy, Mohammad Khairy; El-Sharawy, Solafa
2016-01-01
Background Bronchoscopic lung volume reduction (BLVR), using biological agents, is one of the new alternatives to lung volume reduction surgery. Objectives To evaluate the efficacy and safety of biological BLVR using low cost agents, including autologous blood and fibrin glue. Methods Enrolled patients were divided into two groups: group A (seven patients), in which autologous blood was used, and group B (eight patients), in which fibrin glue was used. The agents were injected through a triple lumen balloon catheter via a fiberoptic bronchoscope. Changes in high resolution computerized tomography (HRCT) volumetry, pulmonary function tests, symptoms, and exercise capacity were evaluated at 12 weeks postprocedure, as were complications. Results In group A, at 12 weeks postprocedure, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: <0.001 and 0.038, respectively). In group B, there was significant improvement in the mean value of HRCT volumetry and residual volume/total lung capacity (% predicted) (P-value: 0.005 and 0.004, respectively). All patients tolerated the procedure with no mortality. Conclusion BLVR using autologous blood and locally prepared fibrin glue is a promising method for treating advanced emphysema in terms of efficacy, safety, and cost-effectiveness. PMID:27536091
Advances in volcano monitoring and risk reduction in Latin America
NASA Astrophysics Data System (ADS)
McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Assistance Program, V. D.; Volcano Observatories, L. A.
2014-12-01
We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data
Advanced Reduction Processes: A New Class of Treatment Processes
Vellanki, Bhanu Prakash; Batchelor, Bill; Abdel-Wahab, Ahmed
2013-01-01
A new class of treatment processes called advanced reduction processes (ARPs) is proposed. ARPs combine activation methods and reducing agents to form highly reactive reducing radicals that degrade oxidized contaminants. Batch screening experiments were conducted to identify effective ARPs by applying several combinations of activation methods (ultraviolet light, ultrasound, electron beam, and microwaves) and reducing agents (dithionite, sulfite, ferrous iron, and sulfide) to degradation of four target contaminants (perchlorate, nitrate, perfluorooctanoic acid, and 2,4-dichlorophenol) at three pH levels (2.4, 7.0, and 11.2). These experiments identified the combination of sulfite activated by ultraviolet light produced by a low-pressure mercury vapor lamp (UV-L) as an effective ARP. More detailed kinetic experiments were conducted with nitrate and perchlorate as target compounds, and nitrate was found to degrade more rapidly than perchlorate. Effectiveness of the UV-L/sulfite treatment process improved with increasing pH for both perchlorate and nitrate. We present the theory behind ARPs, identify potential ARPs, demonstrate their effectiveness against a wide range of contaminants, and provide basic experimental evidence in support of the fundamental hypothesis for ARPs, namely, that activation methods can be applied to reductants to form reducing radicals that degrade oxidized contaminants. This article provides an introduction to ARPs along with sufficient data to identify potentially effective ARPs and the target compounds these ARPs will be most effective in destroying. Further research will provide a detailed analysis of degradation kinetics and the mechanisms of contaminant destruction in an ARP. PMID:23840160
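The screening design described in this abstract crosses activation methods with reducing agents, target contaminants, and pH levels. The short sketch below simply enumerates that experimental matrix using the names given in the abstract; the enumeration itself is an illustration of the design space, not code from the study.

```python
from itertools import product

# Factors named in the abstract.
activations = ["UV light", "ultrasound", "electron beam", "microwaves"]
reductants = ["dithionite", "sulfite", "ferrous iron", "sulfide"]
targets = ["perchlorate", "nitrate", "PFOA", "2,4-dichlorophenol"]
ph_levels = [2.4, 7.0, 11.2]

# Full factorial screening matrix: 4 x 4 x 4 x 3 = 192 conditions.
combinations = list(product(activations, reductants, targets, ph_levels))
print(len(combinations))
```

The combination the study identified as most effective, UV light with sulfite, is one cell of this matrix evaluated for each target compound and pH.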
Leamy, Larry J; Elo, Kari; Nielsen, Merlyn K; Van Vleck, L Dale; Pomp, Daniel
2005-01-01
We estimated heritabilities and genetic correlations for a suite of 15 characters in five functional groups in an advanced intercross population of over 2000 mice derived from a cross of inbred lines selected for high and low heat loss. Heritabilities averaged 0.56 for three body weights, 0.23 for two energy balance characters, 0.48 for three bone characters, 0.35 for four measures of adiposity, and 0.27 for three organ weights, all of which were generally consistent in magnitude with estimates derived in previous studies. Genetic correlations varied from -0.65 to +0.98, and were higher within these functional groups than between groups. These correlations generally conformed to a priori expectations, being positive in sign for energy expenditure and consumption (+0.24) and negative in sign for energy expenditure and adiposity (-0.17). The genetic correlations of adiposity with body weight at 3, 6, and 12 weeks of age (-0.29, -0.22, -0.26) all were negative in sign but not statistically significant. The independence of body weight and adiposity suggests that this advanced intercross population is ideal for a comprehensive discovery of genes controlling regulation of mammalian adiposity that are distinct from those for body weight. PMID:16194522
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Craig, Kellie D.
2011-01-01
The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) the Engineering Demonstration and/or Risk Reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining Engineering Demonstration and/or Risk Reduction.
Advanced supersonic propulsion study. [with emphasis on noise level reduction
NASA Technical Reports Server (NTRS)
Sabatella, J. A. (Editor)
1974-01-01
A study was conducted to determine the promising propulsion systems for advanced supersonic transport application, and to identify the critical propulsion technology requirements. It is shown that noise constraints have a major effect on the selection of the various engine types and cycle parameters. Several promising advanced propulsion systems were identified which show the potential of achieving lower levels of sideline jet noise than the first generation supersonic transport systems. The non-afterburning turbojet engine, utilizing a very high level of jet suppression, shows the potential to achieve FAR 36 noise level. The duct-heating turbofan with a low level of jet suppression is the most attractive engine for noise levels from FAR 36 to FAR 36 minus 5 EPNdb, and some series/parallel variable cycle engines show the potential of achieving noise levels down to FAR 36 minus 10 EPNdb with moderate additional penalty. The study also shows that an advanced supersonic commercial transport would benefit appreciably from advanced propulsion technology. The critical propulsion technology needed for a viable supersonic propulsion system, and the required specific propulsion technology programs are outlined.
Advances in reduction techniques for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells
NASA Astrophysics Data System (ADS)
Ali, Mohamed Mahmoud; Kvande, Halvor
2016-06-01
Development of an advanced Sabatier CO2 reduction subsystem
NASA Technical Reports Server (NTRS)
Kleiner, G. N.; Cusick, R. J.
1981-01-01
A preprototype Sabatier CO2 reduction subsystem was successfully designed, fabricated and tested. The lightweight, quick starting (less than 5 minutes) reactor utilizes a highly active and physically durable methanation catalyst composed of ruthenium on alumina. The use of this improved catalyst permits a simple, passively controlled reactor design with an average lean component H2/CO2 conversion efficiency of over 99% over a range of H2/CO2 molar ratios of 1.8 to 5 while operating with process flows equivalent to a crew size of up to five persons. The subsystem requires no heater operation after start-up even during simulated 55 minute lightside/39 minute darkside orbital operation.
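The "lean component" accounting behind the quoted conversion efficiency follows from the Sabatier stoichiometry, CO2 + 4 H2 → CH4 + 2 H2O: the feed consumes H2 and CO2 in a 4:1 molar ratio, so whichever reactant is lean relative to that ratio limits conversion. The sketch below is an illustrative calculation under that assumption, with invented feed rates; it is not the subsystem's control logic.

```python
def lean_component(h2_moles, co2_moles):
    """Return the feed component that limits the Sabatier reaction,
    given the 4:1 H2:CO2 stoichiometry."""
    return "H2" if h2_moles / co2_moles < 4.0 else "CO2"

def ch4_produced(h2_moles, co2_moles, lean_efficiency=0.99):
    """Moles of CH4 formed when the lean component converts at the
    stated efficiency (abstract: over 99% on average)."""
    if lean_component(h2_moles, co2_moles) == "H2":
        return lean_efficiency * h2_moles / 4.0  # 4 mol H2 per mol CH4
    return lean_efficiency * co2_moles           # 1 mol CO2 per mol CH4

# At the low end of the tested range (H2/CO2 = 1.8), hydrogen is lean.
print(lean_component(1.8, 1.0))
print(round(ch4_produced(1.8, 1.0), 4))
```

At the high end of the tested range (H2/CO2 = 5), CO2 becomes the lean component instead, which is why the abstract quotes efficiency for the lean component rather than for a fixed reactant.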
Lung volume reduction therapies for advanced emphysema: an update.
Berger, Robert L; Decamp, Malcolm M; Criner, Gerard J; Celli, Bartolome R
2010-08-01
Observational and randomized studies provide convincing evidence that lung volume reduction surgery (LVRS) improves symptoms, lung function, exercise tolerance, and life span in well-defined subsets of patients with emphysema. Yet, in the face of an estimated 3 million patients with emphysema in the United States, < 15 LVRS operations are performed monthly under the aegis of Medicare, in part because of misleading reporting in lay and medical publications suggesting that the operation is associated with prohibitive risks and offers minimal benefits. Thus, a treatment with proven potential for palliating and prolonging life may be underutilized. In an attempt to lower risks and cost, several bronchoscopic strategies (bronchoscopic emphysema treatment [BET]) to reduce lung volume have been introduced. The following three methods have been tested in some depth: (1) unidirectional valves that allow exit but bar entry of gas to collapse targeted hyperinflated portions of the lung and reduce overall volume; (2) biologic lung volume reduction (BioLVR) that involves intrabronchial administration of a biocompatible complex to collapse, inflame, scar, and shrink the targeted emphysematous lung; and (3) airway bypass tract (ABT) or creation of stented nonanatomic pathways between hyperinflated pulmonary parenchyma and bronchial tree to decompress and reduce the volume of oversized lung. The results of pilot and randomized pivotal clinical trials suggest that the bronchoscopic strategies are associated with lower mortality and morbidity but are also less efficient than LVRS. Most bronchoscopic approaches improve quality-of-life measures without supportive physiologic or exercise tolerance benefits. Although there is promise of limited therapeutic influence, the available information is not sufficient to recommend use of bronchoscopic strategies for treating emphysema. PMID:20682529
NASA Technical Reports Server (NTRS)
Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini
2013-01-01
All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web
Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa
2005-01-01
Pulverized coal reburning, ammonia injection and advanced reburning in a pilot scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 °C to 1400 °C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the NH3 ability at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NOx Reduction (SNCR) requires an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only a common dose of ammonia as in conventional SNCR technology. Mechanism study shows the oxidization of CO can improve the decomposition of H2O, which enriches the radical pools, igniting the whole set of reactions at lower temperatures. PMID:15682503
NASA Technical Reports Server (NTRS)
Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian
2000-01-01
This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings from exploring the correlation between the underlying models of the Advanced Risk Reduction Tool (ARRT) and how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.
NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements. Demonstrations and
NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A. E-mail: rix@mpia.de E-mail: janewman@pitt.edu
2011-04-20
Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance". This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
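The paper's linear-regime recipe (relative cosmic variance of a galaxy sample = galaxy bias × dark-matter cosmic variance) can be sketched in a few lines. The bias and dark-matter variance values below are illustrative placeholders chosen to reproduce the quoted GOODS figure, not numbers taken from the paper's fitting function:

```python
# Sketch of the linear-regime recipe: sigma_gal = b * sigma_dm.
# The numerical inputs are hypothetical placeholders, NOT the paper's fits.

def galaxy_cosmic_variance(bias, sigma_dm):
    """Linear-regime relative cosmic variance of a galaxy sample."""
    return bias * sigma_dm

# Hypothetical inputs for a GOODS-like field at z ~ 2, dz = 0.5,
# and massive (m* > 1e11 M_sun) galaxies:
sigma_dm_example = 0.10   # placeholder dark-matter cosmic variance
bias_example = 3.8        # placeholder galaxy bias

sigma_gal = galaxy_cosmic_variance(bias_example, sigma_dm_example)
print(f"relative cosmic variance: {sigma_gal:.0%}")
```

With these placeholder inputs the product recovers the ~38% figure quoted for massive galaxies in GOODS.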
Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando
2007-01-01
The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) was studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that treatment with UV/H2O2/O3 was the most effective when applied to the flocculated coffee wastewater, with color, turbidity, and further COD removal efficiencies of 87%. PMID:17918591
Getting around cosmic variance
Kamionkowski, M.; Loeb, A.
1997-10-01
Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLSs observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLSs within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society
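The limiting precision described above follows from the fact that each multipole ℓ offers only 2ℓ+1 independent modes on a single last-scattering surface. The standard cosmic-variance formula (a textbook result, not quoted from this abstract) can be illustrated directly:

```python
import math

def cosmic_variance_fraction(ell):
    """Fractional cosmic-variance uncertainty on the CMB power spectrum C_ell:
    with only 2*ell + 1 independent m-modes observable from one vantage point,
    Delta C_ell / C_ell = sqrt(2 / (2*ell + 1))."""
    return math.sqrt(2.0 / (2 * ell + 1))

# The quadrupole (ell = 2) is the worst case, with ~63% uncertainty from
# cosmic variance alone; this is why sampling the quadrupole seen by many
# clusters, as proposed above, helps.
print(f"ell=2: {cosmic_variance_fraction(2):.3f}")
print(f"ell=200: {cosmic_variance_fraction(200):.3f}")
```

The uncertainty falls as roughly ℓ^(-1/2), so cosmic variance dominates only the lowest multipoles.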
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
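As a rough illustration of how a variance-anisotropy ratio like the 9:1 figure above is measured, one can decompose field fluctuations into components parallel and perpendicular to the mean magnetic field and compare their variances. The synthetic data here are purely illustrative, not solar wind measurements:

```python
import numpy as np

# Synthetic field samples: two perpendicular components with larger
# fluctuation amplitude than the component along the mean field.
rng = np.random.default_rng(0)
B = np.column_stack([
    rng.normal(0.0, 2.1, 10000),   # perpendicular component 1
    rng.normal(0.0, 2.1, 10000),   # perpendicular component 2
    rng.normal(5.0, 1.0, 10000),   # component along the mean field
])

b0 = B.mean(axis=0)
b0_hat = b0 / np.linalg.norm(b0)        # mean-field direction

db = B - b0                              # fluctuations about the mean
par = db @ b0_hat                        # parallel fluctuation amplitude
perp = db - np.outer(par, b0_hat)        # perpendicular fluctuation vector

ratio = perp.var(axis=0).sum() / par.var()
print(f"perpendicular/parallel variance ratio ~ {ratio:.1f}")
```

With these toy amplitudes the ratio comes out near 9:1, mimicking the magnetic-field polarization anisotropy typically reported for the solar wind.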
Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.
Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang
2016-05-01
In view of the climate changes caused by the continuously rising levels of atmospheric CO2, advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. PMID:26996295
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated using data used to estimate Missouri corn acreage.
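The idea of a minimum variance unbiased estimate of a mean under distance-based correlation can be sketched with a generalized least squares (BLUE) estimator. The exponential correlation model, its range parameter, and the `blue_mean` helper are illustrative assumptions, not the report's fitted model:

```python
import numpy as np

def blue_mean(values, coords, corr_length):
    """Minimum-variance unbiased (BLUE/GLS) estimate of the mean of spatially
    correlated samples, assuming an exponential correlation over distance.
    Clustered samples carry redundant information and are down-weighted."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    # pairwise distances and an exponential correlation matrix
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-d / corr_length)
    # BLUE weights: w = C^{-1} 1 / (1^T C^{-1} 1); unbiased since sum(w) = 1
    ones = np.ones(len(values))
    Cinv_1 = np.linalg.solve(C, ones)
    w = Cinv_1 / (ones @ Cinv_1)
    return w @ values

# Three tightly clustered samples plus one isolated sample: the isolated
# sample gets more weight than a simple average would give it.
vals = [10.0, 12.0, 11.0, 20.0]
pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
print(round(blue_mean(vals, pts, corr_length=2.0), 2))
```

Because the three clustered points share information, the estimate lands above the simple average of 13.25, closer to what an even geographic spread would imply.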
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Crocker, Andrew M.; Doering, Kimberly B.; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.
2015-01-01
The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. For NASA's SLS ABEDRR procurement, Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level hardware demonstrations that support NASA's ABEDRR goals for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the
Moussavi, Gholamreza; Shekoohiyan, Sakine
2016-11-15
This work was aimed at investigating the performance of the continuous-flow VUV photoreactor as a novel chemical-less advanced process for simultaneously oxidizing acetaminophen (ACT), as a model pharmaceutical, and reducing nitrate in a single reactor. Solution pH was an important parameter affecting the performance of VUV; the highest ACT oxidation and nitrate reduction were attained at solution pH between 6 and 8. The ACT was oxidized mainly by hydroxyl radicals (HO•), while aqueous electrons were the main working agents in the reduction of nitrate. The performance of the VUV photoreactor improved with increasing hydraulic retention time (HRT); complete degradation of ACT and ~99% reduction of nitrate with 100% N2 selectivity were achieved at an HRT of 80 min. The VUV effluent concentrations of nitrite and ammonium at an HRT of 80 min were below the drinking water standards. A real water sample contaminated with ACT and nitrate was efficiently treated in the VUV photoreactor. Therefore, the VUV photoreactor is a chemical-less advanced process in which both advanced oxidation and advanced reduction reactions are accomplished. This unique feature makes the VUV photoreactor a promising method for treating water contaminated with both pharmaceuticals and nitrate. PMID:27434736
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
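The Allan variance discussed above has a compact standard definition for fractional frequency data y_k: AVAR(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩, where ȳ_k are adjacent averages over intervals of length τ. A minimal non-overlapping implementation (the definition is standard; the white-noise check is illustrative):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance at averaging factor m (tau = m * tau0):
    AVAR = 0.5 * mean of squared differences of adjacent m-sample averages."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m                      # trim to a multiple of m
    ybar = y[:n].reshape(-1, m).mean(axis=1)   # adjacent tau-averages
    diffs = np.diff(ybar)
    return 0.5 * np.mean(diffs ** 2)

# For white frequency noise of unit variance, AVAR at m = 1 is ~1 on average.
rng = np.random.default_rng(1)
print(allan_variance(rng.normal(size=100000)))
```

As the abstract notes, this time-domain statistic does not in general pin down the spectrum, even though the variance of first differences does.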
Nuclear Material Variance Calculation
Energy Science and Technology Software Center (ESTSC)
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
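A hedged sketch of the kind of variance propagation such a tool automates for a single materials-balance term, under a multiplicative error model: random errors accumulate per measurement, while systematic (calibration) errors apply to every measurement in the period and therefore do not average down. The function and numbers are illustrative, not MAVARIC's actual formulas:

```python
def term_variance(snm_mass, rel_std_random, rel_std_systematic, n_measurements):
    """Variance of total SNM in one MB term, multiplicative error model.
    snm_mass: SNM mass per measurement (g); rel_std_*: relative std devs.
    Random errors are independent per measurement; the systematic error is
    fully correlated across all n measurements."""
    random_var = n_measurements * (snm_mass * rel_std_random) ** 2
    systematic_var = (n_measurements * snm_mass * rel_std_systematic) ** 2
    return random_var + systematic_var

# One input-transfer term: 10 batches of 50 g each,
# 1% random and 0.5% systematic measurement error.
v = term_variance(50.0, 0.01, 0.005, 10)
print(f"term sigma = {v ** 0.5:.2f} g")
```

Summing such term variances (plus any covariances between correlated transfer terms) gives the variance of the overall materials balance, from which detection sensitivity follows.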
Biclustering with heterogeneous variance.
Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R
2013-07-23
In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
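The core idea above, scoring a sample against a subgroup using both the subgroup's mean and its own variance rather than a shared variance, can be shown with a Gaussian log-density. The toy numbers are illustrative only, not from the paper's data sets:

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-density of x under a Gaussian with subgroup-specific variance."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# A measurement of 2.0 compared against two subgroups with equal means but
# heterogeneous variances: the hypervariable subgroup (var = 9) fits better,
# a distinction an equal-variance clustering method cannot make.
low_var_score = gaussian_loglik(2.0, 0.0, 1.0)
high_var_score = gaussian_loglik(2.0, 0.0, 9.0)
print(low_var_score < high_var_score)
```

At the mean the ordering reverses (the tight subgroup wins), which is exactly the mean-plus-variance structure the method exploits.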
DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION
Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson
2002-02-01
The primary objective of the project titled "Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction" was to demonstrate at proof-of-concept scale the use of an online software package, the "Plant Environmental and Cost Optimization System" (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.
Recent advancements in Pt and Pt-free catalysts for oxygen reduction reaction.
Nie, Yao; Li, Li; Wei, Zidong
2015-04-21
Developing highly efficient catalysts for the oxygen reduction reaction (ORR) is key to the fabrication of commercially viable fuel cell devices and metal-air batteries for future energy applications. Herein, we review the most recent advances in the development of Pt-based and Pt-free materials in the field of fuel cell ORR catalysis. This review covers catalyst material selection, design, synthesis, and characterization, as well as the theoretical understanding of the catalysis process and mechanisms. The integration of these catalysts into fuel cell operations and the resulting performance/durability are also discussed. Finally, we provide insights into the remaining challenges and directions for future perspectives and research. PMID:25652755
Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long haul, commercial quadjet transport with advanced, high bypass engines mounted above the wing.
Briggs, J. L.; Younger, A. F.
1980-06-02
A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid - plutonium environments. A vinyl copolymer base paint was the only coating, of eight paints tested, with properties that permitted satisfactory decontamination of plutonium and also performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests.
NASA Technical Reports Server (NTRS)
Saiyed, Naseem H.
2000-01-01
Contents of this presentation include: Advanced Subsonic Technology (AST) goals and general information; Nozzle nomenclature; Nozzle schematics; Photograph of all baselines; Configurations tests and types of data acquired; and Engine cycle and plug geometry impact on EPNL.
NASA Technical Reports Server (NTRS)
Braslow, A. L.; Whitehead, A. H., Jr.
1973-01-01
The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R and T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.
Spectral variance of aeroacoustic data
NASA Technical Reports Server (NTRS)
Rao, K. V.; Preisser, J. S.
1981-01-01
An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
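The standard result underlying such spectral variance estimates is that a power-spectrum estimate averaged over K independent segments has var{Ŝ(f)} ≈ S(f)²/K, so the relative standard deviation falls as 1/√K. This is a general property of averaged periodograms, not a formula quoted from the abstract; a quick empirical check with white noise:

```python
import numpy as np

def relative_spectral_std(n_averages):
    """Asymptotic relative std of a K-segment averaged spectral estimate."""
    return 1.0 / np.sqrt(n_averages)

# Average K periodograms of white noise and compare the scatter across
# frequency bins with the 1/sqrt(K) prediction.
rng = np.random.default_rng(2)
K, N = 64, 256
psd = np.zeros(N // 2)
for _ in range(K):
    x = rng.normal(size=N)
    psd += np.abs(np.fft.rfft(x)[1:N // 2 + 1]) ** 2 / N  # skip the DC bin
psd /= K

print(psd.std() / psd.mean(), relative_spectral_std(K))
```

The empirical scatter across bins lands close to the asymptotic prediction, which is why such formulae are useful at the design stage of flyover-noise experiments.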
Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili
2016-04-15
This study investigated the reduction of antibiotic resistance genes (ARGs), intI1 and 16S rRNA genes, by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and a H2O2 concentration of 0.01 mol L(-1) with a pH of 3.0 and a reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH = 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with a H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including initial Fe(2+)/H2O2 molar ratios, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs. PMID:26815295
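The first-order kinetic model the abstract reports implies that log-removal grows linearly with reaction time: log10(N0/Nt) = k·t. A minimal sketch; the rate constant below is an illustrative placeholder, not a value fitted in the study:

```python
# First-order (Chick-Watson style) log-removal kinetics for gene reduction.
# k is a hypothetical rate constant in log10 units per minute.

def log_removal(k, t):
    """Log10 removal after reaction time t (min)."""
    return k * t

def surviving_fraction(k, t):
    """Fraction of target genes remaining after time t."""
    return 10 ** (-log_removal(k, t))

k = 0.1  # placeholder rate constant
print(f"30 min: {log_removal(k, 30):.1f} logs removed, "
      f"surviving fraction {surviving_fraction(k, 30):.1e}")
```

Fitting k under different Fe(2+)/H2O2 ratios, H2O2 doses, and pH values is what lets the parameters above be ranked by importance.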
NASA Technical Reports Server (NTRS)
Goodall, R. G.; Painter, G. W.
1975-01-01
Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and in operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.
Pan, Fuping; Jin, Jutao; Fu, Xiaogang; Liu, Qiao; Zhang, Junyan
2013-11-13
Designing and fabricating advanced oxygen reduction reaction (ORR) electrocatalysts is critical importance for the sake of promoting widespread application of fuel cells. In this work, we report that nitrogen-doped graphene (NG), synthesized via one-step pyrolysis of naturally available sugar in the presence of urea, can serve as metal-free ORR catalyst with excellent electrocatalytic activity, outstanding methanol crossover resistance as well as long-term operation stability in alkaline medium. The resultant NG1000 (annealed at 1000 °C) exhibits a high kinetic current density of 21.33 mA/cm(2) at -0.25 V (vs Ag/AgCl) in O2-saturated 0.1 M KOH electrolyte, compared with 16.01 mA/cm(2) at -0.25 V for commercial 20 wt % Pt/C catalyst. Notably, the NG1000 possesses comparable ORR half-wave potential to Pt/C. The effects of pyrolysis temperature on the physical prosperity and ORR performance of NG are also investigated. The obtained results demonstrate that high activation temperature (1000 °C) results in low nitrogen doping level, high graphitization degree, enhanced electrical conductivity, and high surface area and pore volume, which make a synergetic contribution to enhancing the ORR performance for NG. PMID:24099362
Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2000-01-01
As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa
2005-03-01
Pulverized coal reburning, ammonia injection and advanced reburning in a pilot-scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, a temperature range from 1100 degrees C to 1400 degrees C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 degrees C and a fuel-rich stoichiometric ratio is essential; coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700-1100 degrees C. CO can improve the effectiveness of NH3 at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only a common dose of ammonia as in conventional SNCR technology. A mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools that ignite the whole set of reactions at lower temperatures. PMID:15682503
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
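A hedged sketch of the RVU-based decomposition such an analysis uses: the actual-vs-budget difference is split into a volume component (RVUs delivered) and a rate component (cost per RVU). The split below is the standard cost-accounting convention, with illustrative numbers, not figures from the article:

```python
def rvu_variance(budget_rvus, budget_cost_per_rvu, actual_rvus, actual_cost_per_rvu):
    """Decompose total budget variance into volume and rate components.
    volume: difference in RVUs delivered, priced at the budgeted rate;
    rate: difference in cost per RVU, applied to actual volume.
    The two components sum exactly to the total variance."""
    volume_variance = (actual_rvus - budget_rvus) * budget_cost_per_rvu
    rate_variance = (actual_cost_per_rvu - budget_cost_per_rvu) * actual_rvus
    total = actual_rvus * actual_cost_per_rvu - budget_rvus * budget_cost_per_rvu
    assert abs(volume_variance + rate_variance - total) < 1e-6
    return volume_variance, rate_variance, total

# Hypothetical month: budgeted 1000 RVUs at $40/RVU, delivered 1100 at $42/RVU.
vol, rate, total = rvu_variance(1000, 40.0, 1100, 42.0)
print(vol, rate, total)
```

Separating the two components tells the administrator whether an unfavorable variance came from doing more work or from each unit of work costing more.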
Roden, E.E.; Urrutia, M.M.
1998-06-01
Understanding factors which control the long-term survival and activity of Fe(III)-reducing bacteria (FeRB) in subsurface sedimentary environments is important for predicting their ability to serve as agents for bioremediation of organic and inorganic contaminants. This project seeks to refine the authors' quantitative understanding of microbiological and geochemical controls on bacterial Fe(III) oxide reduction and growth of FeRB, using laboratory reactor systems which mimic to varying degrees the physical and chemical conditions of subsurface sedimentary environments. Methods for studying microbial Fe(III) oxide reduction and FeRB growth in experimental systems which incorporate advective aqueous phase flux are being developed for this purpose. These methodologies, together with an accumulating database on the kinetics of Fe(III) reduction and bacterial growth with various synthetic and natural Fe(III) oxide minerals, will be applicable to experimental and modeling studies of subsurface contaminant transformations directly coupled to or influenced by bacterial Fe(III) oxide reduction and FeRB activity. This report summarizes research accomplished after approximately 1.5 yr of a 3-yr project. A central hypothesis of the research is that advective elimination of the primary end-product of Fe(III) oxide reduction, Fe(II), will enhance the rate and extent of microbial Fe(III) oxide reduction in open experimental systems. This hypothesis is based on previous studies in the laboratory which demonstrated that association of evolved Fe(II) with oxide and FeRB cell surfaces (via adsorption or surface precipitation) is a primary cause for cessation of Fe(III) oxide reduction activity in batch culture experiments. Semicontinuous culturing was adopted as a first approach to test this basic hypothesis. Synthetic goethite or natural Fe(III) oxide-rich subsoils were used as Fe(III) sources, with the Fe(III)-reducing bacterium Shewanella alga as the test organism.
Trotter, Michael A; Hopkins, Peter M
2014-11-01
Advanced chronic obstructive pulmonary disease (COPD) is a significant cause of morbidity. Treatment options beyond conventional medical therapies are limited to a minority of patients. Lung volume reduction surgery (LVRS) although effective in selected subgroups of patients is not commonly undertaken. Morbidity associated with the procedure has contributed to this low utilisation. In response to this, less invasive bronchoscopic lung volume techniques are being developed to attempt to mitigate some of the risks and costs associated with surgery. Of these, endobronchial valve therapy is the most comprehensively studied although the presence of collateral ventilation in a significant proportion of patients has compromised its widespread utility. Bronchial thermal vapour ablation and lung volume reduction (LVR) coils are not dependent on collateral ventilation. These techniques have shown promise in early clinical trials; ongoing work will establish whether they have a role in the management of advanced COPD. Lung transplantation, although effective in selected patients for palliation of symptoms and improving survival, is limited by donor organ availability and economic constraint. Reconditioning marginal organs previously declined for transplantation with ex vivo lung perfusion (EVLP) is one potential strategy in improving the utilisation of donor organs. By increasing the donor pool, it is hoped lung transplantation might be more accessible for patients with advanced COPD into the future. PMID:25478204
Sorge, J.N.; Menzies, B.; Smouse, S.M.; Stallings, J.W.
1995-09-01
Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
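The noise-removal step can be sketched in a few lines: if the instrument noise is independent of the geophysical signal, the measured radiance variance is the sum of the signal variance and the noise variance, so subtracting a known noise variance recovers the gravity wave contribution. A minimal illustration of that general idea (not the AMSU-A algorithm itself; the wave amplitude and noise level below are invented):

```python
import math
import random

def noise_corrected_variance(measurements, noise_std):
    """Estimate signal variance by removing a known instrument noise variance.

    Assumes each measurement is signal plus independent zero-mean noise, so
    var(measurement) = var(signal) + noise_std**2. A sketch of the general
    noise-removal idea only, not the AMSU-A processing chain.
    """
    n = len(measurements)
    mean = sum(measurements) / n
    total_var = sum((x - mean) ** 2 for x in measurements) / n
    return total_var - noise_std ** 2

# Synthetic check with invented numbers: a 2 K amplitude wave (true variance
# 2**2 / 2 = 2 K^2) observed through 0.5 K of instrument noise.
random.seed(0)
signal = [2.0 * math.sin(2 * math.pi * i / 50) for i in range(5000)]
noisy = [s + random.gauss(0, 0.5) for s in signal]
est = noise_corrected_variance(noisy, 0.5)
```

With enough samples the corrected estimate approaches the true 2 K² wave variance, which is how a minimum detectable variance well below the raw noise floor becomes possible.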
Recent advances in membrane bio-technologies for sludge reduction and treatment.
Wang, Zhiwei; Yu, Hongguang; Ma, Jinxing; Zheng, Xiang; Wu, Zhichao
2013-12-01
This paper is designed to critically review the recent developments of membrane bio-technologies for sludge reduction and treatment by covering process fundamentals, performances (sludge reduction efficiency, membrane fouling, pollutant removal, etc.) and key operational parameters. The future perspectives of the hybrid membrane processes for sludge reduction and treatment are also discussed. For sludge reduction using membrane bioreactors (MBRs), literature review shows that biological maintenance metabolism, predation on bacteria, and uncoupling metabolism through using oxic-settling-anaerobic (OSA) process are promising ways that can be employed in full-scale applications. Control methods for worm proliferation are greatly needed; a good sludge reduction and MBR performance can be expected if worm growth is properly controlled. For the lysis-cryptic sludge reduction method, improvement of oxidant dispersion and increase of the interaction with sludge cells can enhance the lysis efficiency. Green uncoupler development might be another research direction for uncoupling metabolism in MBRs. Aerobic hybrid membrane systems can perform well for sludge thickening and digestion in small- and medium-sized wastewater treatment plants (WWTPs), and pilot-scale/full-scale applications have been reported. The anaerobic membrane digestion (AMD) process is a very competitive technology for sludge stabilization and digestion. Use of biogas recirculation for fouling control can be a powerful way to decrease the energy requirements for the AMD process. Future research efforts should be dedicated to membrane preparation for high biomass applications, process optimization, and pilot-scale/full-scale tracking research in order to push forward the real and wide applications of the hybrid membrane systems for sludge minimization and treatment. PMID:23466365
Roden, E.E.; Urrutia, M.M.
1997-07-01
The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10--20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggested that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g. extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and pattern
ADVANCED OXIDATION AND REDUCTION PROCESSES IN THE GAS PHASE USING NON-THERMAL PLASMAS
In the past several years interest in gas-phase pollution control has increased, arising from a larger body of regulations and greater respect for the environment. Advanced oxidation technologies (AOTs), historically used to treat recalcitrant water pollutants via hydroxyl-radica...
NASA Astrophysics Data System (ADS)
Chandrashekar, Anand; Chen, Feng; Lin, Jasmine; Humayun, Raashina; Wongsenakhum, Panya; Chang, Sean; Danek, Michal; Itou, Takamasa; Nakayama, Tomoo; Kariya, Atsushi; Kawaguchi, Masazumi; Hizume, Shunichi
2010-09-01
This paper describes electrical testing results of new tungsten chemical vapor deposition (CVD-W) process concepts that were developed to address the W contact and bitline scaling issues on 55 nm node devices. Contact resistance (Rc) measurements in complementary metal oxide semiconductor (CMOS) devices indicate that the new CVD-W process for sub-32 nm and beyond - consisting of an advanced pulsed nucleation layer (PNL) combined with low resistivity tungsten (LRW) initiation - produces a 20-30% drop in Rc for diffused NiSi contacts. From cross-sectional bright field and dark field transmission electron microscopy (TEM) analysis, such Rc improvement can be attributed to improved plugfill and larger in-feature W grain size with the advanced PNL+LRW process. More experiments that measured contact resistance for different feature sizes point to favorable Rc scaling with the advanced PNL+LRW process. Finally, 40% improvement in line resistance was observed with this process as tested on 55 nm embedded dynamic random access memory (DRAM) devices, confirming that the advanced PNL+LRW process can be an effective metallization solution for sub-32 nm devices.
FINAL REPORT. ADVANCED EXPERIMENTAL ANALYSIS OF CONTROLS ON MICROBIAL FE(III) OXIDE REDUCTION
The objectives of this research project were to refine existing models of microbiological and geochemical controls on Fe(III) oxide reduction, using laboratory reactor systems which mimic to varying degrees the physical and chemical conditions of the subsurface. Novel experimenta...
Advanced Experiment Analysis of controls on Microbial FE(III) Oxide Reduction
Roden, Eric E.; Urrutia, Matilde M.
1999-06-01
Understanding factors which control the long-term survival and activity of Fe(III)-reducing bacteria (FeRB) in subsurface sedimentary environments is important for predicting the ability of these organisms to serve as agents for bioremediation of organic and inorganic contaminants. This project seeks to refine our quantitative understanding of microbiological and geochemical controls on bacterial Fe(III) oxide reduction and growth of FeRB, using laboratory reactor systems which mimic to varying degrees the physical and chemical conditions of subsurface sedimentary environments. Methods for studying microbial Fe(III) oxide reduction and FeRB growth in experimental systems which incorporate advective aqueous phase flux are being developed for this purpose. These methodologies, together with an accumulating database on the kinetics of Fe(III) reduction and bacterial growth with various synthetic and natural Fe(III) oxide minerals, will be applicable to experimental and modeling studies of subsurface contaminant transformations directly coupled to or influenced by bacterial Fe(III) oxide reduction activity.
An investigation into reservoir NOM reduction by UV photolysis and advanced oxidation processes.
Goslan, Emma H; Gurses, Filiz; Banks, Jenny; Parsons, Simon A
2006-11-01
A comparison of four treatment technologies for reduction of natural organic matter (NOM) in a reservoir water was made. The work presented here is a laboratory based evaluation of NOM treatment by UV-C photolysis, UV/H(2)O(2), Fenton's reagent (FR) and photo-Fenton's reagent (PFR). The work investigated ways of reducing the organic load on water treatment works (WTWs) with a view to treating 'in-reservoir' or 'in-pipe' before the water reaches the WTW. The efficiency of each process in terms of NOM removal was determined by measuring UV absorbance at 254 nm (UV(254)) and dissolved organic carbon (DOC). In terms of DOC reduction PFR was the most effective (88% removal after 1 min) however there were interferences when measuring UV(254) which was reduced to a lesser extent (31% after 1 min). In the literature, pH 3 is reported to be the optimal pH for oxidation with FR but here the reduction of UV(254) and DOC was found to be insensitive to pH in the range 3-7. The treatment that was identified as the most effective in terms of NOM reduction and cost effectiveness was PFR. PMID:16765416
ERIC Educational Resources Information Center
Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab
2012-01-01
An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…
NASA Technical Reports Server (NTRS)
Hughes, Christoper E.; Gazzaniga, John A.
2013-01-01
A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 and 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.
Littleton, Harry; Griffin, John
2011-07-31
This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control and marketing tools were developed to advance the Lost Foam Casting process application and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. Current (2011) annual energy saving estimates, based on commercial introduction in 2011 and a market penetration of 97% by 2020, are 5.02 trillion BTU/year, and 6.46 trillion BTU/year with 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with melting and pouring the metal that this technology saves. The average annual estimate of CO2 reduction per year through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).
External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Niedra, Janis M.; Geng, Steven M.
2013-01-01
Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.
Krakowski, R.A.; Bathke, C.G.
1997-12-31
The potential for reducing plutonium inventories in the civilian nuclear fuel cycle through recycle in LWRs of a variety of mixed oxide forms is examined by means of a cost based plutonium flow systems model. This model emphasizes: (1) the minimization of separated plutonium; (2) the long term reduction of spent fuel plutonium; (3) the optimum utilization of uranium resources; and (4) the reduction of (relative) proliferation risks. This parametric systems study utilizes a globally aggregated, long term (approx. 100 years) nuclear energy model that interprets scenario consequences in terms of material inventories, energy costs, and relative proliferation risks associated with the civilian fuel cycle. The impact of introducing nonfertile fuels (NFF, e.g., plutonium oxide in an oxide matrix that contains no uranium) into conventional (LWR) reactors to reduce net plutonium generation, to increase plutonium burnup, and to reduce exo-reactor plutonium inventories also is examined.
ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION
Vangelas, K; Edwards, Elizabeth; Loffler, Frank; Looney, Brian
2006-11-17
Regulatory protocols generally recognize that destructive processes are the most effective mechanisms that support natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological processes and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) lists parameters that provide indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field. The research described herein continues their development efforts to provide a suite of tools to enable direct measures of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities in a system because it does not distinguish between strains of organisms that have different biodegradation capabilities. The results of the investigations provided evidence that tools focusing on relevant enzymes for functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.
NASA Astrophysics Data System (ADS)
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2016-05-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables which have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated in relation to the statistical downscaling process of precipitation in a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate that the Supervised PCA methods yield a significant improvement in performance accuracy over the competing dimensionality reduction algorithms.
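The Supervised PCA idea described above can be sketched compactly, assuming the common formulation in which one takes the top eigenvectors of X^T H K H X for a target kernel K built from the response (here a simple linear kernel y y^T, with H the centering matrix). This is an illustrative sketch under those assumptions, not the implementation used in the paper:

```python
import numpy as np

def supervised_pca(X, y, n_components):
    """Supervised PCA sketch: directions of X with maximal (HSIC-style)
    dependence on the response y, via the top eigenvectors of
    X^T H K H X with a linear target kernel K = y y^T.
    Illustrative only; not the authors' implementation."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K = np.outer(y, y)                       # linear target kernel
    Q = X.T @ H @ K @ H @ X                  # feature-space dependence matrix
    vals, vecs = np.linalg.eigh(Q)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]                    # (n_features, n_components)

# Toy check: the response depends only on feature 0, while feature 1 has the
# largest (response-irrelevant) variance; plain PCA would favour feature 1,
# Supervised PCA should favour feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:, 1] *= 3.0
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
W = supervised_pca(X, y, 1)
dominant = int(np.argmax(np.abs(W[:, 0])))
```

Projecting X onto the returned directions (`X @ W`) gives low-dimensional predictors that retain dependence on the response rather than raw variance; a kernelized variant replaces the linear kernel with a nonlinear one.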
Latitude dependence of eddy variances
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Bell, Thomas L.
1987-01-01
The eddy variance of a meteorological field must tend to zero at high latitudes due solely to the nature of spherical polar coordinates. The zonal averaging operator defines a length scale: the circumference of the latitude circle. When the circumference of the latitude circle is greater than the correlation length of the field, the eddy variance from transient eddies is the result of differences between statistically independent regions. When the circumference is less than the correlation length, the eddy variance is computed from points that are well correlated with each other, and so is reduced. The expansion of a field into zonal Fourier components is also influenced by the use of spherical coordinates. As is well known, a phenomenon of fixed wavelength will have different zonal wavenumbers at different latitudes. Simple analytical examples of these effects are presented along with an observational example from satellite ozone data. It is found that geometrical effects can be important even in middle latitudes.
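The fixed-wavelength effect is easy to quantify: a feature of physical wavelength λ appears at a zonal wavenumber equal to the latitude circle's circumference divided by λ, so the wavenumber falls off with the cosine of latitude. A small numerical illustration (a mean Earth radius of 6371 km is assumed):

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def zonal_wavenumber(wavelength_km, lat_deg):
    """Zonal wavenumber (number of waves around a latitude circle) of a
    feature with fixed physical wavelength: circumference / wavelength."""
    circumference = 2.0 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))
    return circumference / wavelength_km

# A 2000 km wave spans ~20 wavelengths around the equator but only ~10
# around the 60 deg latitude circle, whose circumference is half as large.
m_equator = zonal_wavenumber(2000.0, 0.0)
m_60 = zonal_wavenumber(2000.0, 60.0)
```

The same shrinking circumference is what reduces transient-eddy variance at high latitudes: once the latitude circle is shorter than the field's correlation length, the zonal average is taken over well-correlated points.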
NASA Astrophysics Data System (ADS)
Satake, Kenji
2014-12-01
The December 2004 Indian Ocean tsunami was the worst tsunami disaster in the world's history, with more than 200,000 casualties. This disaster was attributed to the giant size of the earthquake (magnitude M ~ 9, source length >1000 km) and to the lack of anticipation of such an earthquake, of a tsunami warning system, and of knowledge and preparedness for tsunamis in the Indian Ocean countries. In the last ten years, seismology and tsunami sciences as well as tsunami disaster risk reduction have significantly developed. Progress in seismology includes implementation of earthquake early warning, real-time estimation of earthquake source parameters and tsunami potential, paleoseismological studies on past earthquakes and tsunamis, and studies of probable maximum size, recurrence variability, and long-term forecast of large earthquakes in subduction zones. Progress in tsunami science includes accurate modeling of tsunami sources such as the contribution of horizontal components or "tsunami earthquakes", development of new types of offshore and deep ocean tsunami observation systems such as GPS buoys or bottom pressure gauges, deployments of DART gauges in the Pacific and other oceans, improvements in tsunami propagation modeling, and real-time inversion or data assimilation for tsunami warning. These developments have been utilized for tsunami disaster reduction in the forms of tsunami early warning systems, tsunami hazard maps, and probabilistic tsunami hazard assessments. Some of the above scientific developments helped to reveal the source characteristics of the 2011 Tohoku earthquake, which caused devastating tsunami damage in Japan and the Fukushima Dai-ichi Nuclear Power Station accident. Toward tsunami disaster risk reduction, interdisciplinary and trans-disciplinary approaches are needed for scientists with other stakeholders.
Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet
Uribarri, Jaime; Woodruff, Sandra; Goodman, Susan; Cai, Weijing; Chen, Xue; Pyzik, Renata; Yong, Angie; Striker, Gary E.; Vlassara, Helen
2013-01-01
Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and introduces practical approaches for reducing dAGE consumption in daily life. Based on the findings, dry heat promotes new dAGE formation by >10- to 100-fold above the uncooked state across food categories. Animal-derived foods that are high in fat and protein are generally AGE-rich and prone to new AGE formation during cooking. In contrast, carbohydrate-rich foods such as vegetables, fruits, whole grains, and milk contain relatively few AGEs, even after cooking. The formation of new dAGEs during cooking was prevented by the AGE inhibitory compound aminoguanidine and significantly reduced by cooking with moist heat, using shorter cooking times, cooking at lower temperatures, and by use of acidic ingredients such as lemon juice or vinegar. The new dAGE database provides a valuable instrument for estimating dAGE intake and for guiding food choices to reduce dAGE intake. PMID:20497781
Suzuki, Y; Kondo, T; Nakagawa, K; Tsuneda, S; Hirata, A; Shimizu, Y; Inamori, Y
2006-01-01
A new biological nutrient removal process, an anaerobic-oxic-anoxic (A/O/A) system using denitrifying polyphosphate-accumulating organisms (DNPAOs), was proposed. To attain excess sludge reduction and phosphorus recovery, the A/O/A system equipped with an ozonation tank and phosphorus adsorption column was operated for 92 days, and water quality of the effluent, sludge reduction efficiency, and phosphorus recovery efficiency were evaluated. As a result, TOC, T-N and T-P removal efficiencies were 85%, 70% and 85%, respectively, throughout the operating period. These slightly lower removal efficiencies than conventional anaerobic-anoxic-oxic (A/A/O) processes were due to the unexpected microbial population in this system, where DNPAOs were not the dominant group but normal polyphosphate-accumulating organisms (PAOs) that could not utilize nitrate and nitrite as electron acceptors became dominant. However, it was successfully demonstrated that 34-127% of sludge reduction and around 80% of phosphorus recovery were attained. In conclusion, the A/O/A system equipped with ozonation and phosphorus adsorption systems is useful as a new advanced wastewater treatment process to resolve the problems of increasing excess sludge and phosphorus depletion. PMID:16749446
Variance of a Few Observations
ERIC Educational Resources Information Center
Joarder, Anwar H.
2009-01-01
This article demonstrates that the variance of three or four observations can be expressed in terms of the range and the first order differences of the observations. A more general result, which holds for any number of observations, is also stated.
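The result rests on a classical identity that is easy to verify numerically: the sample variance equals the sum of squared pairwise differences divided by n(n - 1), and for sorted observations each pairwise difference is a sum of consecutive first-order differences, with the range the largest of them. A quick sketch that verifies only this identity (the specific closed forms for three and four observations are given in the article):

```python
from itertools import combinations

def variance_from_differences(xs):
    """Sample variance computed solely from pairwise differences, using the
    identity s^2 = sum_{i<j} (x_i - x_j)^2 / (n (n - 1)). For sorted data
    every pairwise difference is a sum of consecutive first-order
    differences, and the range is the largest of them."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

def textbook_variance(xs):
    """Usual mean-based sample variance, for comparison."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [4.0, 7.0, 1.0, 9.0]
v_diff = variance_from_differences(data)   # 12.25
v_text = textbook_variance(data)           # 12.25
```

Both routes give the same value, which is why the variance of a few observations can be written entirely in terms of the range and first-order differences without ever computing the mean.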
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
NASA Technical Reports Server (NTRS)
Wagenknecht, C. D.; Bediako, E. D.
1985-01-01
Advanced Supersonic Transport jet noise may be reduced to Federal Aviation Regulation limits if recommended refinements to a recently developed ejector shroud exhaust system are successfully carried out. A two-part program consisting of a design study and a subscale model wind tunnel test effort conducted to define an acoustically treated ejector shroud exhaust system for supersonic transport application is described. Coannular, 20-chute, and ejector shroud exhaust systems were evaluated. Program results were used in a mission analysis study to determine aircraft takeoff gross weight to perform a nominal design mission, under Federal Aviation Regulation (1969), Part 36, Stage 3 noise constraints. Mission trade study results confirmed that the ejector shroud was the best of the three exhaust systems studied, with a significant takeoff gross weight advantage over the 20-chute suppressor nozzle, which was the second best.
Advanced Monitoring of Trace Metals Applied to Contamination Reduction of Silicon Device Processing
NASA Astrophysics Data System (ADS)
Maillot, P.; Martin, C.; Planchais, A.
2011-11-01
The detrimental effects of metallic contamination on certain key electrical parameters of silicon devices mandate the use of state-of-the-art characterization and metrology tools as well as appropriate control plans. Historically, this has been commonly achieved in-line on monitor wafers through a combination of Total Reflectance X-Ray Fluorescence (TXRF) and post-anneal Surface Photo Voltage (SPV). On the other hand, VPD (Vapor Phase Decomposition) combined with ICP-MS (Inductively Coupled Plasma Mass Spectrometry) or TXRF is known to provide both identification and quantification of surface trace metals at lower detection limits. Based on these considerations, an advanced monitoring scheme using SPV, TXRF, and automated VPD ICP-MS is described.
NASA Technical Reports Server (NTRS)
Rao, D. M.; Goglia, G. L.
1981-01-01
Accomplishments in vortex flap research are summarized. A singular feature of the vortex flap is that, throughout the angle-of-attack range, the flow type remains qualitatively unchanged. Accordingly, no large or sudden change in the aerodynamic characteristics, as happens when forcibly maintained attached flow suddenly reverts to separation, will occur with the vortex flap. Typical wind tunnel test data are presented which show the drag reduction potential of the vortex flap concept applied to a supersonic cruise airplane configuration. The new technology offers a means of aerodynamically augmenting roll-control effectiveness on slender wings at higher angles of attack by manipulating the vortex flow generated from leading-edge separation. The proposed manipulator takes the form of a flap hinged at or close to the leading edge, normally retracted flush with the wing upper surface to conform to the airfoil shape.
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR
Robert S. Weber
1999-05-01
Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on recently demonstrated SO2-reduction catalyst performance at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO2-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported catalyst formulation work at Engelhard, bench-scale supported catalyst testing at Arthur D. Little, and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst which is robust and flexible with regard to choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed, namely that the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated under appropriate levels of conversion and in conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work which would have included further bench-scale testing
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2010 CFR
2010-01-01
Department of Energy Worker Safety and Health Program, Variances; § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...
Code of Federal Regulations, 2010 CFR
2010-01-01
13 CFR 307.22 Variances. EDA may approve variances to the requirements contained in this subpart, provided such variances: (a) Are consistent with the goals of the Economic Adjustment Assistance program and with an...
Code of Federal Regulations, 2010 CFR
2010-07-01
29 CFR 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...
CD bias reduction in CD-SEM linewidth measurements for advanced lithography
NASA Astrophysics Data System (ADS)
Tanaka, Maki; Meessen, Jeroen; Shishido, Chie; Watanabe, Kenji; Minnaert-Janssen, Ingrid; Vanoppen, Peter
2008-03-01
The linewidth measurement capability of the model-based library (MBL) matching technique was evaluated experimentally. This technique estimates the dimensions and shape of a target pattern by comparing a measured SEM image profile to a library of simulated line scans. The simulation model uses a non-linear least squares method to estimate pattern geometry parameters. To examine the application of MBL matching in an advanced lithography process, a focus-exposure matrix wafer was prepared with a leading-edge immersion lithography tool. The evaluation used 36 sites with target structures having various linewidths from 45 to 200 nm. The measurement accuracy was evaluated by using an atomic force microscope (AFM) as a reference measurement system. The results of a first trial indicated that two or more solutions could exist in the parameter space in MBL matching. To solve this problem, we obtained a rough estimation of the scale parameter in SEM imaging, based on experimental results, in order to add a constraint in the matching process. As a result, the sensitivity to sidewall variation in MBL matching was improved, and the measurement bias was reduced from 22.1 to 16 nm. These results indicate the possibility of improving the CD measurement capability by applying this tool parameter appropriately.
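The library-matching step described above can be sketched in miniature: select the simulated profile that minimizes the sum of squared residuals against the measured line scan. The Gaussian "profiles" and linewidth values below are hypothetical stand-ins; real MBL tools refine continuous geometry parameters with nonlinear least squares rather than a discrete lookup.

```python
import numpy as np

def match_library(measured, library):
    """Return index and parameter of the library profile that best matches
    the measured line scan by minimum sum of squared residuals (a toy
    stand-in for model-based library matching)."""
    errors = [np.sum((measured - prof) ** 2) for _, prof in library]
    best = int(np.argmin(errors))
    return best, library[best][0]

# Hypothetical library: linewidth (nm) -> simulated profile (toy Gaussians).
x = np.linspace(-100.0, 100.0, 201)
def sim_profile(width):
    return np.exp(-(x / width) ** 2)

library = [(w, sim_profile(w)) for w in (45, 65, 90, 130, 200)]
measured = sim_profile(65) + 0.01 * np.sin(x / 5.0)  # "noisy" 65 nm line
idx, width = match_library(measured, library)
print(width)  # 65
```

In a real system the residual would be computed against physics-based SEM line-scan simulations, and the discrete search would only seed the continuous fit.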
Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua
2014-01-01
Objective This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. Methods CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. Results At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Conclusions Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively. PMID:24664174
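The contrast-to-noise ratio used to compare the reconstructions can be computed with a common definition (absolute difference of region means divided by the background noise); the abstract does not state its exact formula, and the tissue values below are hypothetical.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background).
    One common definition; illustrative only."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(6)
liver = rng.normal(60.0, 10.0, 10_000)   # hypothetical HU samples
fat = rng.normal(-90.0, 10.0, 10_000)
print(cnr(liver, fat))  # approximately (60 - (-90)) / 10 = 15
```

Lower image noise (the std in the denominator) at a fixed contrast is exactly how iterative reconstruction raises CNR at a given dose.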
2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction
Smith, Aaron; Stehly, Tyler; Walter Musial
2015-09-29
2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone; Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial-scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015 and demonstrable progress towards industry-wide cost reduction goals. Deepwater Wind is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase illustrating the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.
Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe
2015-12-01
Nature has supplied inexhaustible resources for mankind and, at the same time, has progressively become a school for scientists and engineers. Through more than four billion years of rigorous evolution, creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a potential drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesion surfaces, and the flexible skin of the dolphin can accelerate its swimming. Applying biological functional surfaces in daily life, industry, transportation, and agriculture has already brought great benefits, and the field has attracted worldwide attention. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored from different aspects, and the main applications in fluid engineering are then demonstrated in brief. This overview will improve comprehension of the drag-reduction mechanism of the sharkskin surface and understanding of its recent applications in fluid engineering. PMID:26348428
NASA Technical Reports Server (NTRS)
Beltran, Luis R.
2004-01-01
The Advanced Subsonic Combustor Rig (ASCR) is NASA Glenn Research Center's unique high-pressure, high-temperature combustor facility supporting the emissions reduction element of the Ultra-Efficient Engine Technology (UEET) Project. The facility can simulate combustor inlet test conditions up to a pressure of 900 psig and a temperature of 1200 F (non-vitiated). ASCR completed three sector tests in fiscal year 2003 for General Electric, Pratt & Whitney, and Rolls-Royce North America. This will provide NASA and U.S. engine manufacturers the information necessary to develop future low-emission combustors and will help them to better understand durability and operability at these high pressures and temperatures.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
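The variance-based sensitivity indices obtained from a Sobol-Hoeffding decomposition can be illustrated with a generic pick-freeze estimator on a toy two-source model. This is not the authors' Poisson-process reformulation for reaction networks, just the standard first-order Sobol index idea it builds on; the model `f` and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(a, b):
    # Toy "simulator" with two independent standardized noise sources.
    return 3.0 * a + b   # Var = 9*Var(a) + Var(b) = 10

n = 200_000
a = rng.standard_normal(n)
b = rng.standard_normal(n)
b2 = rng.standard_normal(n)   # fresh b-noise while a is held ("frozen")

y = f(a, b)
y_frozen = f(a, b2)
var_y = y.var()

# Pick-freeze estimate of the first-order Sobol index of source a:
cov = np.mean(y * y_frozen) - y.mean() * y_frozen.mean()
s_a = cov / var_y
print(round(s_a, 2))  # analytically S_a = 9/10 = 0.9
```

Re-freezing each noise channel in turn apportions the output variance among channels, which is the quantity the abstract's decomposition delivers per reaction channel.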
Advanced noise reduction in placental ultrasound imaging using CPU and GPU: a comparative study
NASA Astrophysics Data System (ADS)
Zombori, G.; Ryan, J.; McAuliffe, F.; Rainford, L.; Moran, M.; Brennan, P.
2010-03-01
This paper presents a comparison of different implementations of a 3D anisotropic diffusion speckle noise reduction technique on ultrasound images. In this project we are developing a novel volumetric calcification assessment metric for the placenta, and providing a software tool for this purpose. The tool can also automatically segment and visualize (in 3D) ultrasound data. One of the first steps when developing such a tool is to find a fast and efficient way to eliminate speckle noise. Previous works on this topic by Duan, Q. [1] and Sun, Q. [2] have proven that the 3D noise-reducing anisotropic diffusion (3D SRAD) method shows exceptional performance in enhancing ultrasound images for object segmentation. We have therefore implemented this method in our software application and performed a comparative study of the different variants in terms of performance and computation time. To increase processing speed it was necessary to utilize the full potential of current state-of-the-art Graphics Processing Units (GPUs). Our 3D datasets are represented in a spherical volume format. For 2D slice visualization and segmentation, a "scan conversion" or "slice-reconstruction" step is needed, which includes coordinate transformation from spherical to Cartesian, re-sampling of the volume, and interpolation. Combining the noise filtering and slice reconstruction in one process on the GPU, we can achieve close to real-time operation on high-quality data sets without down-sampling or reducing image quality. The GPU code was written in OpenCL, so the presented solution is fully portable.
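The scan-conversion step can be sketched in 2D with nearest-neighbor lookup from a polar (r, theta) slice into a Cartesian image. This is a toy stand-in for the pipeline described above, which works on 3D spherical volumes, interpolates, and runs on the GPU in OpenCL.

```python
import numpy as np

def scan_convert(polar):
    """Nearest-neighbor scan conversion of a 2D polar (r, theta) slice to a
    Cartesian image (toy stand-in for the GPU slice-reconstruction step)."""
    nr, ntheta = polar.shape
    ys, xs = np.mgrid[-nr:nr, -nr:nr] + 0.5        # pixel-center coordinates
    r = np.hypot(xs, ys)                           # radius in sample units
    theta = np.mod(np.arctan2(ys, xs), 2.0 * np.pi)
    ri = np.clip(r.astype(int), 0, nr - 1)
    ti = np.clip((theta / (2.0 * np.pi) * ntheta).astype(int), 0, ntheta - 1)
    out = polar[ri, ti]
    out[r >= nr] = 0.0                             # outside the scanned disc
    return out

polar = np.full((64, 128), 7.0)  # constant echo amplitude over all (r, theta)
img = scan_convert(polar)
print(img.shape, img[64, 64], img[0, 0])  # (128, 128) 7.0 0.0
```

Fusing a filter like 3D SRAD with this resampling, as the paper does, avoids materializing an intermediate denoised spherical volume.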
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
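For reference, the (unmodified) overlapping Allan variance that MVAR generalizes can be estimated from phase data in a few lines; the edf confidence machinery discussed in the abstract is not reproduced here, and the white-FM test signal is illustrative.

```python
import numpy as np

def overlapping_avar(x, m, tau0=1.0):
    """Overlapping Allan variance from phase data x at averaging time m*tau0."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]    # overlapping 2nd differences
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

# White FM noise with unit frequency variance: sigma_y^2(m*tau0) should be ~1/m.
rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
x = np.concatenate(([0.0], np.cumsum(y)))  # phase = integrated frequency
print(overlapping_avar(x, 1))   # ~1.0
print(overlapping_avar(x, 10))  # ~0.1
```

The 1/m falloff recovered here is the power-law signature that edf formulas such as those tabulated in the paper depend on.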
Challenges and opportunities in variance component estimation for animal breeding
Technology Transfer Automated Retrieval System (TEKTRAN)
There have been many advances in variance component estimation (VCE), both in theory and in software, since Dr. Henderson introduced Henderson’s Methods 1, 2, and 3 in 1953. However, many challenges in modern animal breeding are not addressed adequately by current algorithms and software. Examples i...
Neutrino mass without cosmic variance
NASA Astrophysics Data System (ADS)
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b (k ) and the linear growth parameter f (k ) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b (k ) and f (k ) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b (k ) and f (k ). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b (k ) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Zhang, Shihan; Chen, Han; Xia, Yinfeng; Liu, Nan; Lu, Bi-Hong; Li, Wei
2014-10-01
Anthropogenic nitrogen oxides (NOx) emitted from fossil-fuel-fired power plants cause adverse environmental issues such as acid rain, urban ozone smoke, and photochemical smog. A novel chemical absorption-biological reduction (CABR) integrated process under development is regarded as a promising alternative to conventional selective catalytic reduction processes for NOx removal from flue gas because it is economical and environmentally friendly. The CABR process employs ferrous ethylenediaminetetraacetate [Fe(II)EDTA] as a solvent to absorb NOx, followed by microbial denitrification of the NOx to harmless nitrogen gas. Meanwhile, the absorbent Fe(II)EDTA is biologically regenerated to sustain adequate NOx removal. Compared with the conventional denitrification process, CABR not only enhances the mass transfer of NO from the gas to the liquid phase but also minimizes the impact of oxygen on the microorganisms. This review provides the current advances in the development of the CABR process for NOx removal from flue gas. PMID:25149446
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
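The two metrics themselves are cheap to estimate from simulated detector outputs using the standard definitions (Swank factor I = M1²/(M0·M2) from the moments of the pulse-height distribution, Fano factor F = variance/mean); the authors' analytic variance estimators for these metrics are not reproduced here, and the Poisson output model is illustrative.

```python
import numpy as np

def swank_factor(samples):
    """Swank factor I = M1^2 / (M0 * M2), with M0 = 1 for a normalized
    pulse-height distribution, estimated from sampled detector outputs."""
    s = np.asarray(samples, dtype=float)
    return s.mean() ** 2 / np.mean(s ** 2)

def fano_factor(samples):
    """Fano factor F = variance / mean of the detected quanta."""
    s = np.asarray(samples, dtype=float)
    return s.var() / s.mean()

rng = np.random.default_rng(2)
counts = rng.poisson(100.0, size=500_000)  # Poisson outputs: F = 1 exactly
print(fano_factor(counts))   # ~1.0
print(swank_factor(counts))  # ~100/101 = 0.990
```

Monitoring the running uncertainty of these sample estimates is precisely the stopping-criterion use case the paper targets.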
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
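The premise that systematic designs genuinely reduce variance on trended populations is easy to check by simulation. The toy transect below compares random-start systematic sampling with simple random sampling; it illustrates the efficiency gap only, not the striplet estimator itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Strongly trended population density over a transect of 1000 quadrats.
pop = np.linspace(0.0, 10.0, 1000) + rng.standard_normal(1000) * 0.5

n = 100                      # quadrats sampled per survey
reps = 2000
step = len(pop) // n

sys_means, srs_means = [], []
for _ in range(reps):
    start = rng.integers(step)                        # random-start systematic
    sys_means.append(pop[start::step][:n].mean())
    srs_means.append(rng.choice(pop, n, replace=False).mean())

# Systematic sampling spreads effort evenly across the trend, so its
# survey-to-survey variance is far below that of simple random sampling.
print(np.var(sys_means) < np.var(srs_means))  # True
```

The estimation difficulty the abstract describes is the flip side of this: a single systematic survey gives almost no internal replication from which to estimate that small variance.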
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance (the maximal overlap estimator) can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
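The stated identity, wavelet variance equal to one-half the Allan variance for the Haar case, follows directly from the definitions at the unit scale and can be verified numerically. The sketch below uses fractional-frequency samples and the (y_t − y_{t−1})/2 normalization of the unit-scale Haar MODWT coefficient.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.standard_normal(10_000)  # fractional-frequency samples

# Allan variance at the basic sampling interval tau0:
avar = np.mean(np.diff(y) ** 2) / 2.0

# Unit-scale Haar MODWT coefficients and their (wavelet) variance:
w = (y[1:] - y[:-1]) / 2.0
wvar = np.mean(w ** 2)

print(np.isclose(wvar, avar / 2.0))  # True: wavelet variance = Allan variance / 2
```

Since w² = (Δy)²/4 term by term, the identity holds exactly on the samples, not just in expectation.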
Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.
1997-12-31
This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) the installation of three on-line carbon-in-ash monitors and (2) the design of, and results to date from, the advanced digital control/optimization phase of the project.
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)²/(var V). Confidence intervals for mvar can then be constructed from levels of the appropriate χ² distribution.
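The third-difference reformulation can be implemented directly: cumulatively sum the phase residuals, take a lag-m third difference, and normalize. A convenient sanity check, used below, is that mvar coincides with the ordinary (overlapping) Allan variance at m = 1; the white-FM test data are illustrative.

```python
import numpy as np

def mvar(x, m, tau0=1.0):
    """Modified Allan variance via the third difference of the cumulative
    sum of phase residuals x (the reformulation described above)."""
    x = np.asarray(x, dtype=float)
    w = np.concatenate(([0.0], np.cumsum(x)))   # w[k] = sum of x[:k]
    d3 = w[3 * m:] - 3.0 * w[2 * m:-m] + 3.0 * w[m:-2 * m] - w[:-3 * m]
    return np.mean(d3 ** 2) / (2.0 * m ** 4 * tau0 ** 2)

def avar(x, m, tau0=1.0):
    """Overlapping (unmodified) Allan variance from phase data."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(50_000))  # phase data for white FM noise
print(np.isclose(mvar(x, 1), avar(x, 1)))   # True: mvar = avar at m = 1
```

At m = 1 the lag-1 third difference of the cumulative sum reduces algebraically to the second difference of the phase, so the two estimators agree exactly on the same data.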
A multi-variance analysis in the time domain
NASA Technical Reports Server (NTRS)
Walter, Todd
1993-01-01
Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.
NASA Astrophysics Data System (ADS)
Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.
2009-05-01
The complexity of multitemporal/multispectral astronomical data sets, together with the approaching petascale of such datasets and large astronomical surveys, requires automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increased dimensionality of the data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding a lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality," thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase the computational efficiency of machine learning algorithms, improve statistical inference, and together with image fusion enable effective scientific visualization (as opposed to merely illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of the essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated through simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
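A minimal sketch of the two ingredients for the birth-death example, under illustrative parameter choices (this is not the paper's implementation): each reaction channel is driven by its own unit-rate Poisson process via the random time-change representation, and a pick-freeze Sobol estimator then attributes a share of the output variance to the birth channel by reusing its stream while resampling the death channel's.

```python
import numpy as np

def birth_death(T, x0, b, d, seeds):
    """Birth-death network (birth at constant rate b, death at rate d*x),
    simulated with each reaction channel driven by its own unit-rate
    Poisson process via the random time-change representation."""
    streams = [np.random.default_rng(s) for s in seeds]
    next_fire = [g.exponential() for g in streams]   # next internal jumps
    clock = [0.0, 0.0]                               # integrated-rate clocks
    t, x = 0.0, x0
    while True:
        rates = [b, d * x]
        waits = [(next_fire[k] - clock[k]) / rates[k] if rates[k] > 0
                 else np.inf for k in range(2)]
        k = int(np.argmin(waits))
        if t + waits[k] > T:
            return x
        t += waits[k]
        for j in range(2):
            clock[j] += rates[j] * waits[k]
        x += 1 if k == 0 else -1
        next_fire[k] += streams[k].exponential()

# Pick-freeze (Sobol) estimate of the birth channel's first-order variance
# share of X(T): reuse the birth stream's seed, resample the death stream's.
gen = np.random.default_rng(0)
N = 400
y, y_pf = np.empty(N), np.empty(N)
for i in range(N):
    s_birth, s_death, s_death2 = gen.integers(0, 2**31, size=3)
    y[i] = birth_death(5.0, 20, 1.0, 0.1, (s_birth, s_death))
    y_pf[i] = birth_death(5.0, 20, 1.0, 0.1, (s_birth, s_death2))
S_birth = (np.mean(y * y_pf) - y.mean() ** 2) / y.var(ddof=1)
print(round(float(S_birth), 2))   # noisy first-order index estimate
```

The same freeze-and-resample pattern, applied to each channel (and to pairs of channels), yields the full channel-wise variance decomposition the abstract describes.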
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
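The price, quantity, and volume variances mentioned above follow a standard three-way decomposition; a hypothetical nursing-hours example (illustrative figures, not from the article, and without the acuity extension):

```python
def flexible_budget_variances(bud_price, bud_qty_per_unit, bud_volume,
                              act_price, act_qty, act_volume):
    """Classic three-way split of the total spending variance.
    Sign convention: positive = unfavorable (spent more than budgeted)."""
    flex_qty = bud_qty_per_unit * act_volume   # inputs allowed at actual volume
    price_var    = (act_price - bud_price) * act_qty
    quantity_var = (act_qty - flex_qty) * bud_price
    volume_var   = (act_volume - bud_volume) * bud_qty_per_unit * bud_price
    # The three variances sum exactly to the total spending variance.
    total = act_price * act_qty - bud_price * bud_qty_per_unit * bud_volume
    assert abs(total - (price_var + quantity_var + volume_var)) < 1e-9
    return price_var, quantity_var, volume_var

# Budget: 4 h per patient-day at $30/h for 1,000 patient-days.
# Actual: 4,500 h at $32/h for 1,050 patient-days.
print(flexible_budget_variances(30.0, 4.0, 1000, 32.0, 4500, 1050))
```

Here the $24,000 total variance splits into $9,000 price, $9,000 quantity, and $6,000 volume; an acuity variance would further split the quantity variance by patient-severity mix.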
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
Weinstein, R.E.; Tonnemacher, G.C.
1999-07-01
The Clinton Administration signed the 1997 Kyoto Protocol agreement that would limit US greenhouse gas emissions, of which carbon dioxide (CO{sub 2}) is the most significant. While the Kyoto Protocol has not yet been submitted to the Senate for ratification, in the past, there have been few proposed environmental actions that had continued and widespread attention of the press and environmental activists that did not eventually lead to regulation. Since the Kyoto Protocol might lead to future regulation, its implications need investigation by the power industry. Limiting CO{sub 2} emissions affects the ability of the US to generate reliable, low cost electricity, and has tremendous potential impact on electric generating companies with a significant investment in coal-fired generation, and on their customers. This paper explores the implications of reducing coal plant CO{sub 2} by various amounts. The amount of reduction for the US that is proposed in the Kyoto Protocol is huge. The Kyoto Protocol would commit the US to reduce its CO{sub 2} emissions to 7% below 1990 levels. Since 1990, there has been significant growth in US population and the US economy, driving carbon emissions 34% higher by year 2010. That means CO{sub 2} would have to be reduced by 30.9%, which is extremely difficult to accomplish. The paper tells why. There are, however, coal-based technologies that should be available in time to make significant reductions in coal-plant CO{sub 2} emissions. The paper focuses on one plant repowering method that can reduce CO{sub 2} per kWh by 25%, advanced circulating pressurized fluidized bed combustion combined cycle (APFBC) technology, based on results from a recent APFBC repowering concept evaluation of the Carolina Power and Light Company's (CP and L) L.V. Sutton steam station. The replacement of the existing 50-year base of power generating units needed to meet proposed Kyoto Protocol CO{sub 2} reduction commitments would be a massive undertaking. It is
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for variances. (1) Upon application by...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Variance provision. 52.2183 Section 52.2183 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions...
Minimum variance beamformer weights revisited.
Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs
2015-10-15
Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained, if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is always inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
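The claim that the noise-covariance weights coincide with the traditional ones when the beamformer points exactly at the true source can be checked numerically. A toy single-source sketch with synthetic covariances (illustrative, not MEG data); `mv_weights` is a hypothetical helper implementing the classic unit-gain minimum-variance expression:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8                                    # number of sensors
l = rng.normal(size=m)                   # lead field of the known source
N = np.eye(m) + 0.1 * np.ones((m, m))    # synthetic noise covariance
s2 = 4.0                                 # source power
C = s2 * np.outer(l, l) + N              # full data covariance

def mv_weights(cov, lead):
    """Unit-gain minimum-variance weights: w = C^-1 l / (l' C^-1 l)."""
    ci_l = np.linalg.solve(cov, lead)
    return ci_l / (lead @ ci_l)

w_full  = mv_weights(C, l)   # traditional: full data covariance
w_noise = mv_weights(N, l)   # alternative: noise covariance only

# With an exact lead field the two coincide (Sherman-Morrison identity:
# C^-1 l is proportional to N^-1 l, and normalization removes the scale).
print(np.allclose(w_full, w_noise))   # True
```

The difference the study emphasizes appears only when the lead field is slightly mismatched: the full-covariance weights then partially suppress the true source, while the noise-covariance weights do not.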
Analysis of Variance Components for Genetic Markers with Unphased Genotypes
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
Global variance reduction for Monte Carlo reactor physics calculations
Zhang, Q.; Abdel-Khalik, H. S.
2013-07-01
Over the past few decades, development of hybrid Monte Carlo-deterministic (MC-DT) techniques has focused primarily on shielding applications, i.e. problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations at the assembly level, where typically one needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method to prepare homogenized few-group cross section sets at the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that using the SUBSPACE method, significant speedup can be achieved over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work is a major step toward leveraging the accuracy of MC calculations for assembly calculations. (authors)
Simulation testing of unbiasedness of variance estimators
Link, W.A.
1993-01-01
In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
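A simplified version of such a simulation test (illustrative, not Link's exact statistic): generate R replicate samples, compare the average of V with the empirical variance of X across replicates, and form a z-type statistic. Note the substitution of the empirical mean of X for its unknown expectation, exactly the kind of step the article cautions requires care:

```python
import numpy as np

rng = np.random.default_rng(42)
R, n = 20000, 15                       # replicates, sample size per replicate
samples = rng.exponential(scale=2.0, size=(R, n))

X = samples.mean(axis=1)               # unbiased estimator of the mean
V = samples.var(axis=1, ddof=1) / n    # claimed unbiased estimator of Var(X)

# Per-replicate discrepancy between V and the squared deviation of X from
# its empirical mean (substituted for the unknown true mean; this adds a
# small O(1/R) bias that must be accounted for in a rigorous derivation).
D = V - (X - X.mean()) ** 2
z = D.mean() / (D.std(ddof=1) / np.sqrt(R))
print(round(float(z), 2))   # approximately N(0, 1) when V is unbiased
```

Here V = s²/n is genuinely unbiased for Var(X), so |z| should rarely exceed the usual normal cutoffs; a biased V drives z away from zero as R grows.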
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2014 CFR
2014-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2012 CFR
2012-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2011 CFR
2011-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
Code of Federal Regulations, 2010 CFR
2010-10-01
... advance, partial, or progress payments upon finding of substantial evidence of fraud. 970.5232-1 Section... upon finding of substantial evidence of fraud. As prescribed in 970.3200-1-1, insert the following... Contractor's request for advance, partial, or progress payment is based on fraud. (b) The Contractor shall...
On Some Representations of Sample Variance
ERIC Educational Resources Information Center
Joarder, Anwar H.
2002-01-01
The usual formula for variance depending on rounding off the sample mean lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always give benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
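The mean-free representation alluded to above generalizes the two-observation case: the sample variance equals the sum of (x_i - x_j)^2 over all pairs i < j, divided by n*(n-1). A small sketch:

```python
import itertools
import statistics

def pairwise_variance(xs):
    """Sample variance without computing the mean: sum of (x_i - x_j)^2
    over all pairs i < j, divided by n*(n-1).
    For n = 2 this reduces to (x1 - x2)^2 / 2."""
    n = len(xs)
    pairs = itertools.combinations(xs, 2)
    return sum((a - b) ** 2 for a, b in pairs) / (n * (n - 1))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(pairwise_variance(data), statistics.variance(data))  # identical values
```

Because no intermediate mean is rounded off, the pairwise form sidesteps the precision issue the abstract raises, at the cost of O(n^2) arithmetic.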
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances....
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...
Matthews, P B
1997-01-01
1. The human stretch reflex is known to produce a phase advance in the EMG reflexly evoked by sinusoidal stretching, after allowing for the phase lag introduced by simple conduction. Such phase advance counteracts the tendency to tremor introduced by the combined effect of the conduction delay and the slowness of muscle contraction. The present experiments confirm that the EMG advance cannot be attributed solely to the phase advance introduced by the muscle spindles, and show that a major additional contribution is provided by the dynamic properties of individual motoneurones. 2. The surface EMG was recorded from biceps brachii when two different types of sinusoidally varying mechanical stimuli were applied to its tendon at 2-40 Hz. The first was conventional sinusoidal displacement ('stretch'); the spindle discharge would then have been phase advanced. The second was a series of weak taps at 103 Hz, with their amplitude modulated sinusoidally ('modulated vibration'). The overall spindle discharge should then have been in phase with the modulating signal, since the probability of any individual Ia fibre responding to a tap would increase with its amplitude. The findings with this new stimulus apply to motoneurone excitation by any rhythmic input, whether generated centrally or peripherally. 3. The sinusoidal variation of the EMG elicited by the modulated vibration still showed a delay-adjusted phase advance, but the value was less than that for simple stretching. At 10 Hz the difference was 70-80 deg. This was taken to be the phase advance introduced by the spindles, very slightly underestimated because of the lags produced by tendon compliance in transmitting sinusoidal stretch to the muscle proper. The adjusted phase advance with modulated vibration was taken to represent that introduced by the reflex centres, undistorted by tendon compliance. At 10 Hz the reflex centres produced about the same amount of phase advance as the muscle spindles. 4. At modulation
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and achieve the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
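The minimization described above has a closed form via Lagrange multipliers for the two equality constraints (full investment and target return). A sketch with small synthetic inputs (illustrative numbers, not the FBMKLCI data):

```python
import numpy as np

# Illustrative inputs: expected weekly returns and a positive-definite
# covariance matrix for three assets.
mu = np.array([0.002, 0.004, 0.003])
Sigma = np.array([[0.0010, 0.0002, 0.0001],
                  [0.0002, 0.0020, 0.0003],
                  [0.0001, 0.0003, 0.0015]])
r_target = 0.003
ones = np.ones(3)

# Minimize w' Sigma w  subject to  w'mu = r_target and w'1 = 1.
# Lagrange conditions give w = lam * Sigma^-1 mu + gam * Sigma^-1 1,
# with lam, gam fixed by the two constraints.
Si_mu = np.linalg.solve(Sigma, mu)
Si_1 = np.linalg.solve(Sigma, ones)
a, b, c = ones @ Si_mu, mu @ Si_mu, ones @ Si_1
lam = (c * r_target - a) / (b * c - a * a)
gam = (b - a * r_target) / (b * c - a * a)
w = lam * Si_mu + gam * Si_1

print(w.round(3), float(w @ mu), float(w.sum()))  # hits target, sums to 1
```

Sweeping r_target over a range of values and recording w' Sigma w traces out the familiar mean-variance efficient frontier.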
Encoding of natural sounds by variance of the cortical local field potential.
Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V
2016-06-01
Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594
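The intertrial-variance quantity studied here is concrete: the variance across repeated trials at each time point, as distinct from the trial-averaged (mean evoked) response. A synthetic sketch in which a stimulus suppresses ongoing variability (illustrative, not the ferret LFP data):

```python
import numpy as np

rng = np.random.default_rng(3)
trials, T = 40, 200
t = np.arange(T)
evoked = np.sin(2 * np.pi * t / T)      # mean (evoked) response
# Ongoing variability is suppressed in the second half of each trial,
# mimicking the stimulus-driven variance reduction reported above.
sigma = np.where(t < T // 2, 1.0, 0.4)
lfp = evoked + sigma * rng.normal(size=(trials, T))

mean_resp = lfp.mean(axis=0)    # classic trial-averaged evoked response
itv = lfp.var(axis=0, ddof=1)   # intertrial variance at each time point
print(itv[:T // 2].mean() > itv[T // 2:].mean())   # True: variance drops
```

Note that the evoked component cancels out of `itv` entirely, which is why, as the study reports, variance changes need not track the amplitude of the mean response.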
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied to many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), have been introduced, mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach handles both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk of mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.

A Variance Based Active Learning Approach for Named Entity Recognition
NASA Astrophysics Data System (ADS)
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. Conditional Random Fields (CRFs) are chosen as the underlying learning model due to their promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
Functional Analysis of Variance for Association Studies
Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.; Greenwood, Mark C.; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most of common complex diseases, the identified genetic variants only account for a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperform two popularly used methods – SKAT and a previously proposed method based on functional linear models (FLM), – especially if a sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying three methods (FANOVA, SKAT and FLM) to sequencing data from Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL 4 and ANGPTL 3 associated with obesity, FANOVA was able to identify both genes associated with obesity. PMID:25244256
Another Line for the Analysis of Variance
ERIC Educational Resources Information Center
Brown, Bruce L.; Harshbarger, Thad R.
1976-01-01
A test is developed for hypotheses about the grand mean in the analysis of variance, using the known relationship between the t distribution and the F distribution with 1 df (degree of freedom) for the numerator. (Author/RC)
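The known relationship used here is that an F statistic with 1 numerator degree of freedom equals the square of the corresponding t statistic; for the grand mean, F = n*ybar^2 / s^2 = t^2. A quick numerical check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(loc=0.5, scale=1.0, size=30)

# t statistic for H0: grand mean = 0
t = y.mean() / (y.std(ddof=1) / np.sqrt(len(y)))

# Equivalent one-df F statistic: the sum of squares for the grand mean
# (n * ybar^2) over the error mean square (s^2)
F = len(y) * y.mean() ** 2 / y.var(ddof=1)

print(np.isclose(t ** 2, F))   # True
```

Because the two statistics are algebraically identical, the F form simply lets the grand-mean test appear as one more line in the ANOVA table, which is the article's point.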
Variance anisotropy in compressible 3-D MHD
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, Minping; Parashar, Tulasi
2016-06-01
We employ spectral method numerical simulations to examine the dynamical development of anisotropy of the variance, or polarization, of the magnetic and velocity fields in compressible magnetohydrodynamic (MHD) turbulence. Both variance anisotropy and spectral anisotropy emerge under the influence of a large-scale mean magnetic field B0; these are distinct effects, although sometimes related. Here we examine the appearance of variance parallel to B0 when starting from a highly anisotropic state. The discussion is based on a turbulence-theoretic approach rather than a wave perspective. We find that parallel variance emerges over several characteristic nonlinear times, often attaining a quasi-steady level that depends on plasma beta. Consistency with solar wind observations seems to occur when the initial state is dominated by quasi-two-dimensional fluctuations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
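The "half the sorbate on the sorbent" design rule can be illustrated with a small Monte Carlo sketch. The assumptions here are mine, not the report's: a single-point batch estimate Kd = ((C0 − C)/C)·(V/m), Gaussian error on the measured final concentration C only, and illustrative values for C0, the error, and V/m:

```python
import math
import random

random.seed(7)

C0 = 100.0          # initial solution concentration (arbitrary units)
SIGMA = 1.0         # absolute measurement error on the final concentration
V_OVER_M = 1.0      # solution volume : sorbent mass ratio (hypothetical)
N = 20_000          # Monte Carlo replicates per scenario

def kd_relative_sd(frac_sorbed):
    """Relative std. dev. of single-point Kd estimates, Kd = (C0-C)/C * V/m,
    when the final concentration C is measured with Gaussian error."""
    c_true = C0 * (1.0 - frac_sorbed)
    kds = []
    for _ in range(N):
        c_meas = c_true + random.gauss(0.0, SIGMA)
        kds.append((C0 - c_meas) / c_meas * V_OVER_M)
    mean = sum(kds) / N
    var = sum((k - mean) ** 2 for k in kds) / (N - 1)
    return math.sqrt(var) / mean

rsd_low, rsd_half, rsd_high = (kd_relative_sd(f) for f in (0.05, 0.5, 0.95))
print(rsd_low, rsd_half, rsd_high)
```

The relative spread is smallest when roughly half the sorbate partitions and grows sharply toward either extreme, which is the effect the solution:sorbent ratio adjustment targets.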
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
Sambandam, Sankar; Balakrishnan, Kalpana; Ghosh, Santu; Sadasivam, Arulselvan; Madhav, Satish; Ramasamy, Rengaraj; Samanta, Maitreya; Mukhopadhyay, Krishnendu; Rehman, Hafeez; Ramanathan, Veerabhadran
2015-03-01
Household air pollution from use of solid fuels is a major contributor to the national burden of disease in India. Currently available models of advanced combustion biomass cook-stoves (ACS) report significantly higher efficiencies and lower emissions in the laboratory when compared to traditional cook-stoves, but relatively little is known about household-level exposure reductions achieved under routine conditions of use. We report results from initial field assessments of six commercial ACS models from the states of Tamil Nadu and Uttar Pradesh in India. We monitored 72 households (divided into six arms to each receive an ACS model) for 24-h kitchen area concentrations of PM2.5 and CO before and (1-6 months) after installation of the new stove, together with detailed information on fixed and time-varying household characteristics. Detailed surveys collected information on user perceptions regarding acceptability for routine use. While the median percent reductions in 24-h PM2.5 and CO concentrations ranged from 2 to 71% and 10 to 66%, respectively, concentrations consistently exceeded WHO air quality guideline values across all models, raising questions regarding the health relevance of such reductions. Most models were perceived to be sub-optimally designed for routine use, often resulting in inappropriate and inadequate levels of use. Household concentration reductions also run the risk of being compromised by high ambient backgrounds from community-level solid-fuel use and contributions from surrounding fossil fuel sources. Results indicate that achieving health-relevant exposure reductions in solid-fuel-using households will require cook-stove technologies that integrate emissions reductions with ease of use and adoption at community scale. Urgent efforts are also needed to accelerate progress towards cleaner fuels. PMID:25293811
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-23
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. This is, first, because such models better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
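The likelihood-based evaluation idea can be sketched generically (an illustration with Gaussian predictive distributions, not the models of the paper): the average negative log predictive density is low only when both the predicted mean and the predicted variance match the data.

```python
import math
import random

random.seed(3)

# Synthetic "ground truth": measurements scattered around a mean with sd 2.0
truth_mean, truth_sd = 10.0, 2.0
data = [random.gauss(truth_mean, truth_sd) for _ in range(5_000)]

def avg_nll(pred_mean, pred_var, xs):
    """Average negative log predictive density under N(pred_mean, pred_var)."""
    return sum(
        0.5 * math.log(2 * math.pi * pred_var)
        + (x - pred_mean) ** 2 / (2 * pred_var)
        for x in xs
    ) / len(xs)

well_calibrated = avg_nll(10.0, 4.0, data)   # predictive variance matches the data
overconfident = avg_nll(10.0, 0.25, data)    # predictive variance far too small
print(well_calibrated, overconfident)
```

An underconfident model (variance far too large) is likewise penalized, so the score ranks models by calibration rather than by mean accuracy alone.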
Quantifying variances in comparative RNA secondary structure prediction
2013-01-01
Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes, are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for, and the variance in, poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662
NASA Astrophysics Data System (ADS)
Singh, R.; Mahajan, V.
2014-07-01
In the present work, surface hardness investigations were made on acrylonitrile butadiene styrene (ABS) pattern-based investment castings after advancements in shell moulding for replication of biomedical implants. For the present study, a hip joint, made of ABS material, was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, the mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings with the addition of nylon fibre (1-2 cm in length of 1.5D). This study outlines the surface hardness mechanism for the cast component prepared from the ABS master pattern after the advancement in shell moulding. The results of the study highlight that during shell production, fibre-modified shells have a much reduced drain time. Further, the results are supported by cooling rate and microstructure analysis of the casting.
[ADVANCE-ON Trial; How to Achieve Maximum Reduction of Mortality in Patients With Type 2 Diabetes].
Kanorskiĭ, S G
2015-01-01
Of 10,261 patients with type 2 diabetes who survived to the end of the randomized ADVANCE trial, 83% were included in the ADVANCE-ON project for observation for 6 years. The difference in the level of blood pressure achieved during 4.5 years of within-trial treatment with a fixed perindopril/indapamide combination quickly vanished, but the significant decrease in total and cardiovascular mortality in the group of patients treated with this combination for 4.5 years was sustained during the 6 years of post-trial follow-up. The results can be related to a gradually weakening protective effect of the perindopril/indapamide combination on the cardiovascular system, and indicate the expedience of long-term use of this antihypertensive therapy for maximal lowering of mortality in patients with diabetes. PMID:26164995
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Discrimination of frequency variance for tonal sequences
Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.
2014-01-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTAN2, while in the signal interval, the variance of the sequence was σSIG2 (with σSIG2 > σSTAN2). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σSTAN2. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σSIG2-σSTAN2) to σSTAN2 yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data. PMID:25480064
A new variance-based global sensitivity analysis technique
NASA Astrophysics Data System (ADS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2013-11-01
A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices only reflect the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient, but it is only applicable for computing low-order indices. The model emulation is able to estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
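For context, the Sobol main-effect index that the W-indices generalize can be estimated with the classical pick-freeze Monte Carlo scheme. A minimal sketch for a hypothetical additive model Y = X1 + 2·X2 with independent uniform inputs (this illustrates Sobol indices only, not the range-reduction step that defines the W-indices):

```python
import random

random.seed(1)
N = 100_000

def model(x1, x2):
    return x1 + 2.0 * x2          # hypothetical additive test model

# Pick-freeze estimate of the Sobol first-order index of X1:
# S1 = Cov(Y, Y') / Var(Y), where Y' re-uses X1 but redraws X2.
y, y_frozen = [], []
for _ in range(N):
    x1, x2, x2b = random.random(), random.random(), random.random()
    y.append(model(x1, x2))
    y_frozen.append(model(x1, x2b))

mean_y = sum(y) / N
var_y = sum((v - mean_y) ** 2 for v in y) / N
cov = sum(a * b for a, b in zip(y, y_frozen)) / N - mean_y * (sum(y_frozen) / N)
s1 = cov / var_y
print(s1)  # analytic value: Var(X1)/Var(Y) = (1/12)/(5/12) = 0.2
```

For this additive model the Sobol total effect of X1 equals its main effect; the paper's point is that the corresponding W-indices need not coincide once input ranges are reduced.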
Arnetz, B B
1996-01-01
There is a paucity of studies concerning occupational health aspects of working with the most advanced information technologies, such as those found in some of the world-renowned telecommunication systems development laboratories. However, many of these techniques will later be applied in the regular office environment. We wanted to identify some of the major stressors perceived by advanced telecommunication systems design employees and develop a valid and reliable instrument by which to monitor such stressors. We were also interested in assessing the impact of a controlled prospective stress-reduction program on perceived mental stress and specific psychophysiological parameters. A total of 116 employees were recruited. Sixty-one were offered participation in one of three stress-reduction training programs (intervention group). The remaining 50 served as a reference group. After a detailed baseline assessment, including a comprehensive questionnaire and psychophysiological measurements, new assessments were made at the end of the formal training program (+3 months) and after an additional 5-month period. Results reveal a significant improvement in the intervention group with regard to circulating levels of the stress-sensitive hormone prolactin as well as an attenuation of mental strain. Cardiovascular risk indicators were also improved. Circulating thrombocytes decreased in the intervention group. The type of stress-reduction program chosen and the intensity of participation did not significantly affect results. Coping style was not affected, and no beneficial effects were observed with regard to the psychological characteristics of the work, e.g., intellectual discretion and control over work processes. The survey instrument is now being used in the continuous improvement of work processes and strategic leadership of occupational health issues. The results suggest that prior psychophysiological stress research, based on low- and medium-skill, rather
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
Inhomogeneity-induced variance of cosmological parameters
NASA Astrophysics Data System (ADS)
Wiegand, A.; Schwarz, D. J.
2012-02-01
Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~102 Mpc scale) can be used to determine the global cosmological parameters (defined at the ~104 Mpc scale)? Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
NASA Technical Reports Server (NTRS)
Low, John K. C.; Schweiger, Paul S.; Premo, John W.; Barber, Thomas J.; Saiyed, Naseem (Technical Monitor)
2000-01-01
NASA's model-scale nozzle noise tests show that it is possible to achieve a 3 EPNdB jet noise reduction with inward-facing chevrons and flipper-tabs installed on the primary nozzle and fan nozzle chevrons. These chevrons and tabs are simple devices and are easy to incorporate into existing short-duct separate-flow non-mixed nozzle exhaust systems. However, these devices are expected to cause some small amount of thrust loss relative to the axisymmetric baseline nozzle system. Thus, it is important to have these devices further tested in a calibrated nozzle performance test facility to quantify the thrust performance of these devices. The choice of chevrons or tabs for jet noise suppression would most likely be based on the results of thrust loss performance tests to be conducted by Aero System Engineering (ASE) Inc. It is anticipated that the most promising concepts identified from this program will be validated in full-scale engine tests at both Pratt & Whitney and Allied-Signal, under funding from NASA's Engine Validation of Noise Reduction Concepts (EVNRC) programs. This will bring the technology readiness level to the point where the jet noise suppression concepts could be incorporated with high confidence into either new or existing turbofan engines having short-duct, separate-flow nacelles.
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest, such as the twist parameter and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Decomposition of Variance for Spatial Cox Processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2012-01-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing the equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
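A minimal sketch of the Modified Allan variance estimator from phase (time-error) samples, assuming the standard textbook definition (this is illustrative and is not the approximating algorithm of the paper). At averaging factor m = 1 it coincides with the overlapping Allan variance, which serves as a sanity check:

```python
import random

def mod_allan_var(x, m, tau0):
    """Modified Allan variance from phase samples x, averaging factor m,
    sampling interval tau0.  Standard estimator:
    Mod s_y^2(m*tau0) = sum_j [ sum_{i=j..j+m-1} (x[i+2m]-2x[i+m]+x[i]) ]^2
                        / (2 * m^2 * (m*tau0)^2 * (N - 3m + 1))
    """
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3m+1 phase samples")
    total = 0.0
    for j in range(n - 3 * m + 1):
        inner = sum(x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(j, j + m))
        total += inner ** 2
    tau = m * tau0
    return total / (2.0 * m * m * tau * tau * (n - 3 * m + 1))

def overlapping_allan_var(x, tau0):
    """Overlapping Allan variance at averaging factor 1 (sanity check)."""
    n = len(x)
    s = sum((x[i + 2] - 2 * x[i + 1] + x[i]) ** 2 for i in range(n - 2))
    return s / (2.0 * tau0 * tau0 * (n - 2))

random.seed(5)
phase = [random.gauss(0.0, 1e-9) for _ in range(1000)]  # white phase noise
print(mod_allan_var(phase, 1, 1.0), overlapping_allan_var(phase, 1.0))
```

The chi-square machinery in the paper enters when putting confidence intervals on such point estimates, since each estimate is a sum of correlated squared terms with far fewer equivalent degrees of freedom than the number of summands.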
NASA Technical Reports Server (NTRS)
Brausch, J. F.; Motsinger, R. E.; Hoerst, D. J.
1986-01-01
Ten scale-model nozzles were tested in an anechoic free-jet facility to evaluate the acoustic characteristics of a mechanically suppressed inverted-velocity-profile coannular nozzle with an acoustically treated ejector system. The nozzle system used was developed from aerodynamic flow lines evolved in a previous contract, defined to incorporate the restraints imposed by the aerodynamic performance requirements of an Advanced Supersonic Technology/Variable Cycle Engine system through all its mission phases. Acoustic data for 188 test points were obtained, 87 under static and 101 under simulated flight conditions. The tests investigated variables of hardwall ejector application to a coannular nozzle with a 20-chute outer annular suppressor, ejector axial positioning, treatment application to ejector and plug surfaces, and treatment design. Laser velocimeter, shadowgraph photography, aerodynamic static pressure, and temperature measurements were acquired on select models to yield diagnostic information regarding the flow field and aerodynamic performance characteristics of the nozzles.
Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; dos Santos, Luciana Urbano
2014-01-01
This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ = 254 nm) of 5,480 mJcm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number of G. duodenalis cysts in synthetic and surface water and was most effective in reducing the fluorescence of the cyst walls that were present in the surface water. The AOP showed a higher deleterious potential for G. duodenalis cysts than either peroxidation (H2O2) or photolysis (UV) processes alone. PMID:27379301
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes... Safety and Health Act of 1970 (OSH Act; 29 U.S.C. 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it...
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Variance Anisotropy of Solar Wind fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.
2013-12-01
Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.
Comparing the Variances of Two Dependent Groups.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1990-01-01
Recently, C. E. McCulloch (1987) suggested a modification of the Morgan-Pitman test for comparing the variances of two dependent groups. This paper demonstrates that there are situations where the procedure is not robust. A subsample approach, similar to the Box-Scheffe test, and the Sandvik-Olsson procedure are also assessed. (TJH)
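The classical Morgan-Pitman test referenced above (McCulloch's modification is not shown here) exploits the identity that Var(x) = Var(y) for paired samples exactly when the sums and differences are uncorrelated, so it reduces to a Pearson correlation test. A minimal sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

def morgan_pitman(x, y):
    """Morgan-Pitman test for equal variances of two dependent samples.

    Var(x) = Var(y) is equivalent to zero correlation between the
    sums (x + y) and differences (x - y), so the test is a Pearson
    correlation test on those derived variables.
    Returns (correlation, two-sided p-value).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    return stats.pearsonr(x + y, x - y)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + np.sqrt(0.75) * rng.normal(size=500)  # Var(y) = Var(x) = 1
r, p = morgan_pitman(x, y)
print(f"r = {r:.3f}, p = {p:.3f}")
```

With equal population variances, as constructed here, the correlation is near zero; doubling one variable's scale drives the correlation toward -1 and the p-value toward zero.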
Videotape Project in Child Variance. Final Report.
ERIC Educational Resources Information Center
Morse, William C.; Smith, Judith M.
The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…
Testing Variances in Psychological and Educational Research.
ERIC Educational Resources Information Center
Ramsey, Philip H.
1994-01-01
A review of the literature indicates that the two best procedures for testing variances are one that was proposed by O'Brien (1981) and another that was proposed by Brown and Forsythe (1974). An examination of these procedures for a variety of populations confirms their robustness and indicates how optimal power can usually be obtained. (SLD)
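The Brown and Forsythe (1974) procedure mentioned above is the median-centered variant of Levene's test; in SciPy it corresponds to `scipy.stats.levene` with `center='median'`. A minimal sketch with illustrative data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=200)   # unit standard deviation
b = rng.normal(0.0, 3.0, size=200)   # three times the spread

# Brown-Forsythe: an ANOVA on absolute deviations from each
# group's median, which makes the test robust to non-normality.
stat, p = stats.levene(a, b, center='median')
print(f"W = {stat:.2f}, p = {p:.2e}")
```

With a ninefold variance difference at these sample sizes, the test rejects equality of variances decisively.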
10 CFR 1022.16 - Variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
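The formal procedure the intuitive task targets, one-way ANOVA, compares between-group to within-group variability. A minimal sketch with hypothetical group data (not from the study) using `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group1 = rng.normal(10.0, 2.0, size=30)
group2 = rng.normal(10.0, 2.0, size=30)
group3 = rng.normal(13.0, 2.0, size=30)  # shifted mean

# One-way ANOVA: tests whether all group means are equal by
# comparing variability between groups to variability within them.
f_stat, p = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p:.2e}")
```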
18 CFR 1304.408 - Variances.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...