Sample records for advanced variance reduction

  1. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
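    A minimal 1-D sketch of the CADIS/FW-CADIS bookkeeping described above may help fix ideas. It assumes a forward flux estimate and an adjoint flux are already available as mesh arrays (in practice they would come from a deterministic solver); the array values, names, and toy geometry are illustrative only and are not the MAVRIC/ADVANTG implementation.

        import numpy as np

        # Toy 1-D mesh; in a real workflow phi_fwd and phi_adj would come from a
        # deterministic (discrete-ordinates) solver driven by the sequence.
        mesh = np.linspace(0.0, 100.0, 50)              # cm, cell centres
        phi_fwd = np.exp(-mesh / 20.0) + 1e-6            # placeholder forward flux estimate
        sigma_d = np.full_like(mesh, 0.01)               # placeholder dose-response function

        # CADIS: the adjoint source is the detector response itself.
        q_adj_cadis = sigma_d.copy()

        # FW-CADIS: weight the adjoint source by the reciprocal of the forward
        # estimate of the response, emphasising poorly sampled regions.
        q_adj_fw = sigma_d / (sigma_d * phi_fwd)

        phi_adj = np.exp((mesh - 100.0) / 25.0)          # placeholder adjoint flux

        # Consistent source biasing and weight-window centres: particles are born
        # inside their windows because both use the same adjoint flux.
        q_true = np.ones_like(mesh) / mesh.size          # unbiased source pdf
        R = np.sum(q_true * phi_adj)                     # estimated response (normalisation)
        q_biased = q_true * phi_adj / R                  # biased source pdf
        ww_center = R / phi_adj                          # weight-window centre per cell

        # Birth weight equals the window centre in every cell (the "consistent" part).
        print(np.allclose(q_true / q_biased, ww_center))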

  2. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  3. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
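    As a concrete illustration of the kind of variance reduction discussed here, the toy script below compares plain Monte Carlo with antithetic variables for averaging a scalar effective coefficient over random configurations; the harmonic-mean "corrector" output is a 1-D stand-in under stated assumptions, not the authors' corrector problems.

        import numpy as np

        rng = np.random.default_rng(0)

        def effective_coeff(u):
            """Toy surrogate for a corrector-problem output: a nonlinear functional
            of the i.i.d. uniform variables u that parameterise one configuration
            of the random medium (stand-in for solving the corrector PDE)."""
            a = 1.0 + 0.9 * (u - 0.5)            # random coefficient in each cell
            return len(a) / np.sum(1.0 / a)       # harmonic mean, as in 1-D homogenization

        n_cells, n_mc = 64, 2000

        # Plain Monte Carlo over independent configurations.
        plain = np.array([effective_coeff(rng.random(n_cells)) for _ in range(n_mc)])

        # Antithetic variables: pair each configuration u with its reflection 1 - u
        # and average the two outputs, which preserves the mean but cancels much of
        # the odd-order fluctuation.
        anti = []
        for _ in range(n_mc // 2):
            u = rng.random(n_cells)
            anti.append(0.5 * (effective_coeff(u) + effective_coeff(1.0 - u)))
        anti = np.array(anti)

        print("plain MC variance of the mean  :", plain.var(ddof=1) / len(plain))
        print("antithetic variance of the mean:", anti.var(ddof=1) / len(anti))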

  4. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).

  5. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  6. Importance Sampling Variance Reduction in GRESS ATMOSIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wakeford, Daniel Tyler

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
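    For readers unfamiliar with the technique named in the title, the sketch below shows plain importance sampling on a scalar deep-penetration toy problem: draw from a biased distribution and multiply each score by the ratio of the true to the biased density. It is generic and illustrative, not the GRESS ATMOSIM or Geant4 implementation, and the biasing parameter is an arbitrary choice.

        import numpy as np

        rng = np.random.default_rng(1)

        # Estimate the deep-penetration probability P(X > 8) for X ~ Exp(1),
        # a stand-in for a particle surviving many mean free paths.
        threshold, n = 8.0, 100_000
        exact = np.exp(-threshold)

        # Analog sampling: almost no samples land in the region of interest.
        x = rng.exponential(1.0, n)
        analog = (x > threshold).astype(float)

        # Importance sampling: draw from a stretched exponential q(x) = (1/b) exp(-x/b)
        # and weight each sample by p(x)/q(x).
        b = threshold                                  # biasing parameter (illustrative)
        y = rng.exponential(b, n)
        weights = (np.exp(-y) * b) / np.exp(-y / b)    # p(y)/q(y) with p = Exp(1), q = Exp(1/b)
        biased = np.where(y > threshold, weights, 0.0)

        for name, s in (("analog", analog), ("importance-sampled", biased)):
            rel_err = s.std(ddof=1) / np.sqrt(n) / exact
            print(f"{name:>19}: mean={s.mean():.3e}  rel.err={rel_err:.2%}  exact={exact:.3e}")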

  7. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
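    The weight-window mechanics that such adjoint-generated maps feed into can be sketched compactly; in the toy below the adjoint flux profile and window bounds are placeholders rather than MCNP or PARTISN output, and the factor-of-five window width is only a typical choice.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy mesh-based weight windows: centres proportional to the reciprocal of an
        # adjoint ("importance") flux, here a placeholder exponential profile.
        phi_adj = np.exp(np.linspace(0.0, 4.0, 20))     # importance grows toward the detector
        R = 1.0                                          # normalisation (e.g. source response)
        ww_lo = R / phi_adj                              # lower window bound per mesh cell
        ww_hi = 5.0 * ww_lo                              # typical upper bound = 5x lower

        def apply_weight_window(weight, cell):
            """Split or roulette a particle so its weight falls inside the window.
            Returns a list of (weight, cell) particles to continue transporting."""
            lo, hi = ww_lo[cell], ww_hi[cell]
            if weight > hi:                              # too heavy: split
                n_split = int(np.ceil(weight / hi))
                return [(weight / n_split, cell)] * n_split
            if weight < lo:                              # too light: Russian roulette
                survival = weight / lo
                return [(lo, cell)] if rng.random() < survival else []
            return [(weight, cell)]

        # Example: a heavy particle deep in the shield gets split into several copies;
        # a very light one near the source is almost always rouletted away.
        print(apply_weight_window(1.0, 15))
        print(apply_weight_window(1e-3, 2))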

  8. Reduction of variance in spectral estimates for correction of ultrasonic aberration.

    PubMed

    Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C

    2006-01-01

    A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.

  9. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  10. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
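    The core idea of the correlated-process scheme, an auxiliary process with a known solution driven by the same noise as the main one, can be illustrated with a pair of toy SDEs; the drift functions and parameters below are invented for illustration and are not the Fokker–Planck gas model.

        import numpy as np

        rng = np.random.default_rng(3)

        # Estimate E[X_T] for a nonlinear SDE dX = -sin(X) dt + s dW by coupling it to
        # an auxiliary Ornstein-Uhlenbeck process dY = -Y dt + s dW with a known mean,
        # driven by the SAME Brownian increments (the correlated "parallel" process).
        x0, s, T, n_steps, n_paths = 0.4, 0.5, 1.0, 200, 5000
        dt = T / n_steps

        X = np.full(n_paths, x0)
        Y = np.full(n_paths, x0)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # shared noise => strong correlation
            X += -np.sin(X) * dt + s * dW
            Y += -Y * dt + s * dW

        # Exact mean of the discretised OU process (continuous-time limit: x0 * exp(-T)).
        EY_known = x0 * (1.0 - dt) ** n_steps

        plain = X                                         # crude MC estimator of E[X_T]
        controlled = X - Y + EY_known                     # control-variate estimator, same mean

        print("plain      :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n_paths))
        print("controlled :", controlled.mean(), "+/-", controlled.std(ddof=1) / np.sqrt(n_paths))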

  11. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1,000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
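    A small dense-matrix sketch of deflated Hutchinson trace estimation follows; it uses a full SVD in place of the iterative singular-triplet solves (e.g. PRIMME) used at scale, and the matrix itself is synthetic.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy ill-conditioned matrix: a few very small singular values dominate tr(A^{-1}).
        n, k = 200, 10
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        sig = np.concatenate([np.linspace(1e-3, 1e-2, k), np.linspace(0.5, 5.0, n - k)])
        A = (U * sig) @ V.T                                  # A = U diag(sig) V^T

        # Deflate the k smallest singular triplets:
        #   tr(A^{-1}) = tr(A^{-1} P) + tr(A^{-1}(I - P)),  with P = U_k U_k^T.
        # The first term is computed exactly, the second stochastically.
        Uk, sk, Vk = U[:, :k], sig[:k], V[:, :k]
        exact_part = np.sum(np.einsum('ij,ij->j', Uk, Vk) / sk)   # tr(V_k S_k^{-1} U_k^T)

        def hutchinson(matvec, dim, n_samples):
            est = 0.0
            for _ in range(n_samples):
                z = rng.choice([-1.0, 1.0], dim)                  # Rademacher probe vector
                est += z @ matvec(z)
            return est / n_samples

        solve = lambda b: np.linalg.solve(A, b)                   # stands in for a sparse solver
        plain = hutchinson(solve, n, 100)
        deflated = exact_part + hutchinson(lambda z: solve(z - Uk @ (Uk.T @ z)), n, 100)

        print("true trace :", np.trace(np.linalg.inv(A)))
        print("Hutchinson :", plain)
        print("deflated   :", deflated)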

  12. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1,000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  13. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, Peter; Varghese, Philip; Goldstein, David

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.

  14. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
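    The multilevel idea, shifting most of the sampling burden onto a cheap surrogate and correcting with a few coupled high-fidelity evaluations, can be written as a two-level estimator; the scalar model functions below are stand-ins for the HDG and reduced basis solvers, and the sample sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)

        # Two-level Monte Carlo: E[s_hi] = E[s_lo] + E[s_hi - s_lo].
        # The cheap model is sampled heavily; the expensive correction only lightly.
        def s_hi(theta):      # stand-in for a high-fidelity PDE output
            return np.sin(theta) + 0.05 * theta**2

        def s_lo(theta):      # stand-in for a reduced-basis / coarse surrogate
            return theta - theta**3 / 6 + 0.05 * theta**2   # cheap approximation of s_hi

        n_lo, n_hi = 200_000, 500
        theta_lo = rng.normal(0.0, 0.3, n_lo)
        theta_hi = rng.normal(0.0, 0.3, n_hi)

        level0 = s_lo(theta_lo)                       # many cheap evaluations
        level1 = s_hi(theta_hi) - s_lo(theta_hi)      # few coupled, expensive corrections

        estimate = level0.mean() + level1.mean()
        std_err = np.sqrt(level0.var(ddof=1) / n_lo + level1.var(ddof=1) / n_hi)

        crude = s_hi(rng.normal(0.0, 0.3, n_hi))      # same expensive budget, no surrogate
        print("two-level :", estimate, "+/-", std_err)
        print("crude MC  :", crude.mean(), "+/-", crude.std(ddof=1) / np.sqrt(n_hi))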

  15. Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.

    The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.

  16. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages because of the radioactivity in the reactor structure materials. A good estimate of the neutron activation products distributed in the reactor structure materials obviously impacts decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep-penetration calculation with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimate. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Fission neutron transport was determined in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  17. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
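    For reference, the exponential transform that Dwivedi's method builds on biases the distance-to-collision sampling and compensates with a weight factor; the 1-D purely absorbing slab below illustrates only that ingredient (no angular biasing), and the biasing parameter p is an arbitrary illustrative choice.

        import numpy as np

        rng = np.random.default_rng(9)

        # 1-D, purely absorbing slab of thickness L and total cross section sigma.
        # Score the transmission probability exp(-sigma*L) with and without the
        # exponential transform (stretched flight paths toward the far face).
        sigma, L, p, n = 1.0, 10.0, 0.7, 200_000
        sigma_star = sigma * (1.0 - p)          # biased cross section along +x (mu = 1)
        exact = np.exp(-sigma * L)

        def transmission(biased):
            s_eff = sigma_star if biased else sigma
            d = rng.exponential(1.0 / s_eff, n)             # sampled flight distance
            w = np.ones(n)
            if biased:
                transmitted = d >= L
                # weight = true probability / biased probability for the sampled outcome
                w[transmitted] = np.exp(-sigma * L) / np.exp(-s_eff * L)
                # collided particles would carry this weight if transport continued;
                # in this absorbing toy problem they simply score zero.
                w[~transmitted] = (sigma * np.exp(-sigma * d[~transmitted])) / (
                    s_eff * np.exp(-s_eff * d[~transmitted]))
            score = np.where(d >= L, w, 0.0)
            return score.mean(), score.std(ddof=1) / np.sqrt(n)

        print("analog        :", transmission(False), "exact:", exact)
        print("exp. transform:", transmission(True))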

  18. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in a scattering order-dependent integral equation, but also generalizes the variance reduction formalism for a wide range of simulation scenarios. In previous studies, variance reduction was achieved either with the scattering phase function forward truncation technique or with the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  19. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT), a normalization method for a Compton camera, was studied. For the VRT, Compton list-mode data of a planar uniform source of 140 keV were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation-maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.

  20. Optimisation of 12 MeV electron beam simulation using variance reduction technique

    NASA Astrophysics Data System (ADS)

    Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul

    2017-05-01

    Monte Carlo (MC) simulation of electron beam radiotherapy consumes a long computation time. An approach called the variance reduction technique (VRT) was implemented in MC to speed up this calculation. This work focused on optimising the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT. The validated MC model simulation was repeated with the VRT parameter (electron range rejection), controlled by global electron cut-off energies of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. The 5 MeV electron range rejection was therefore used in the particle-history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.
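    Electron range rejection itself amounts to a simple test, shown schematically below; the residual-range lookup and the geometry distance are placeholders rather than EGSnrc internals, and range rejection is an approximate (not strictly unbiased) technique, which is why the choice of global cut-off energy matters.

        def csda_range_cm(energy_mev):
            """Placeholder residual-range lookup (water-like, order of magnitude only)."""
            return 0.5 * energy_mev

        def range_reject(electron, dist_to_boundary_cm, ecut_mev=5.0):
            """Return True if the electron can be terminated on the spot.

            Range rejection: if the electron's energy is below the global cut-off AND
            its residual range is too short to reach the region boundary, it cannot
            affect the tally elsewhere, so its remaining energy is deposited locally
            and the expensive condensed-history transport is skipped.
            """
            if electron["energy"] >= ecut_mev:
                return False
            return csda_range_cm(electron["energy"]) < dist_to_boundary_cm

        # Example: a 2 MeV electron 3 cm from the nearest boundary is terminated
        # when the cut-off is 5 MeV, but kept when the cut-off is 1 MeV.
        e = {"energy": 2.0}
        print(range_reject(e, 3.0, ecut_mev=5.0))   # True  -> deposit energy locally
        print(range_reject(e, 3.0, ecut_mev=1.0))   # False -> keep transporting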

  1. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust, with good generalization performance on new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from the available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.

  2. Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits

    NASA Technical Reports Server (NTRS)

    Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.

    2005-01-01

    This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft which included a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.

  3. NASA Noise Reduction Program for Advanced Subsonic Transports

    NASA Technical Reports Server (NTRS)

    Stephens, David G.; Cazier, F. W., Jr.

    1995-01-01

    Aircraft noise is an important byproduct of the world's air transportation system. Because of growing public interest and sensitivity to noise, noise reduction technology is becoming increasingly important to the unconstrained growth and utilization of the air transportation system. Unless noise technology keeps pace with public demands, noise restrictions at the international, national and/or local levels may unduly constrain the growth and capacity of the system to serve the public. In recognition of the importance of noise technology to the future of air transportation as well as the viability and competitiveness of the aircraft that operate within the system, NASA, the FAA and the industry have developed noise reduction technology programs having application to virtually all classes of subsonic and supersonic aircraft envisioned to operate far into the 21st century. The purpose of this paper is to describe the scope and focus of the Advanced Subsonic Technology Noise Reduction program with emphasis on the advanced technologies that form the foundation of the program.

  4. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  5. Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Sheikh-Bagheri, D.

    A new variance reduction technique (VRT) is implemented in the BEAM code [1] to specifically improve the efficiency of calculating penumbral distributions of in-air fluence profiles calculated for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT includes splitting of photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributing the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis, in addition to the energy of the photon. It is shown that the use of the VRT developed in this work can lead to a 6-9 fold improvement in the efficiency of the penumbral photon fluence of a 60Co beam compared to that calculated using the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
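    A sketch of the symmetry-based splitting step is given below: each photon leaving the capsule is replaced by several copies placed at random azimuths about the beam axis, preserving the z direction cosine, the energy and the total weight. The data layout and the number of copies are illustrative assumptions, not the BEAM implementation.

        import numpy as np

        rng = np.random.default_rng(6)

        def split_azimuthally(photon, n_split):
            """Split a photon into n_split copies distributed around the beam (z) axis.

            Exploits the cylindrical symmetry of an isotopic capsule source: each copy
            is rotated by a random azimuth about z, which preserves the radial position,
            the z direction cosine and the energy, while the weight is divided evenly.
            """
            x, y, z = photon["pos"]
            u, v, w = photon["dir"]
            copies = []
            for _ in range(n_split):
                phi = rng.uniform(0.0, 2.0 * np.pi)
                c, s = np.cos(phi), np.sin(phi)
                copies.append({
                    "pos": (c * x - s * y, s * x + c * y, z),   # rotate position about z
                    "dir": (c * u - s * v, s * u + c * v, w),   # rotate direction about z
                    "energy": photon["energy"],
                    "weight": photon["weight"] / n_split,
                })
            return copies

        p = {"pos": (1.0, 0.0, 10.0), "dir": (0.1, 0.0, 0.995), "energy": 1.25, "weight": 1.0}
        for c in split_azimuthally(p, 3):
            print(c)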

  6. Space Launch System NASA Research Announcement Advanced Booster Engineering Demonstration and/or Risk Reduction

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Craig, Kellie D.

    2011-01-01

    The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS, and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) the Engineering Demonstration and/or Risk Reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining Engineering Demonstration and/or Risk Reduction.

  7. Oxidation-Reduction Resistance of Advanced Copper Alloys

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.

    2003-01-01

    Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al₂O₃, are being evaluated for oxidation resistance by static TGA exposures in low-p(O₂) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO₂ or H₂/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.

  8. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  9. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
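    The DSA index function builds on standard variance-based (Sobol) indices; the snippet below estimates ordinary first-order indices for a toy three-input model with the pick-freeze approach, as a baseline for what DSA generalises. The model and sample sizes are illustrative only.

        import numpy as np

        rng = np.random.default_rng(7)

        def model(x):
            """Toy engineering response with three uncertain inputs."""
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A = rng.normal(0.0, 1.0, (n, d))
        B = rng.normal(0.0, 1.0, (n, d))
        fA, fB = model(A), model(B)
        var_y = np.concatenate([fA, fB]).var(ddof=1)

        # First-order index S_i = Var(E[Y|X_i]) / Var(Y), estimated with the pick-freeze
        # trick: the two runs paired inside the mean share only input column i.
        for i in range(d):
            ABi = B.copy()
            ABi[:, i] = A[:, i]                           # B with column i taken from A
            S_i = np.mean(fA * (model(ABi) - fB)) / var_y
            print(f"S_{i + 1} = {S_i:.3f}")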

  10. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  11. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.

  12. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
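    The PCEV criterion, maximising the outcome variance explained by a covariate over linear combinations of the outcomes, reduces to a generalized eigenvalue problem; the sketch below works through it on simulated data and is illustrative only, not the authors' software.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(8)

        # Simulated data: p correlated outcomes, one covariate x driving a single
        # linear combination of them (all values illustrative).
        n, p = 500, 6
        x = rng.normal(size=n)
        signal = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])
        Y = np.outer(x, 0.4 * signal) + rng.normal(size=(n, p))

        # Split the (centred) outcome variance into a model part and a residual part
        # from the per-outcome regression on x.
        xc = x - x.mean()
        Yc = Y - Y.mean(axis=0)
        beta = (xc @ Yc) / (xc @ xc)            # OLS slope for each outcome
        fitted = np.outer(xc, beta)
        resid = Yc - fitted
        V_model = fitted.T @ fitted / n
        V_resid = resid.T @ resid / n

        # PCEV: the loading vector w maximising w' V_model w / w' V_resid w is the
        # leading generalized eigenvector of the pair (V_model, V_resid).
        vals, vecs = eigh(V_model, V_resid)
        w = vecs[:, -1]
        print("PCEV loadings (up to scale):", np.round(w / np.linalg.norm(w), 2))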

  13. Analytic variance estimates of Swank and Fano factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.

  14. Genetic Variance in the F2 Generation of Divergently Selected Parents

    Treesearch

    M.P. Koshy; G. Namkoong; J.H. Roberds

    1998-01-01

    Either by selective breeding for population divergence or by using natural population differences, F2 and advanced generation hybrids can be developed with high variances. We relate the size of the genetic variance to the population divergence based on a forward and backward mutation model at a locus with two alleles with additive gene action....

  15. Compressed air noise reductions from using advanced air gun nozzles in research and development environments.

    PubMed

    Prieve, Kurt; Rice, Amanda; Raynor, Peter C

    2017-08-01

    The aims of this study were to evaluate sound levels produced by compressed air guns in research and development (R&D) environments, replace conventional air gun models with advanced noise-reducing air nozzles, and measure changes in sound levels to assess the effectiveness of the advanced nozzles as engineering controls for noise. Ten different R&D manufacturing areas that used compressed air guns were identified and included in the study. A-weighted sound level and Z-weighted octave band measurements were taken simultaneously using a single instrument. In each area, three sets of measurements, each lasting for 20 sec, were taken 1 m away and perpendicular to the air stream of the conventional air gun while a worker simulated typical air gun work use. Two different advanced noise-reducing air nozzles were then installed. Sound level and octave band data were collected for each of these nozzles using the same methods as for the original air guns. Both of the advanced nozzles provided sound level reductions of about 7 dBA, on average. The highest noise reductions measured were 17.2 dBA for one model and 17.7 dBA for the other. In two areas, the advanced nozzles yielded no sound level reduction, or they produced small increases in sound level. The octave band data showed strong similarities in sound level among all air gun nozzles within the 10-1,000 Hz frequency range. However, the advanced air nozzles generally had lower noise contributions in the 1,000-20,000 Hz range. The observed decreases at these higher frequencies caused the overall sound level reductions that were measured. Installing new advanced noise-reducing air nozzles can provide large sound level reductions in comparison to existing conventional nozzles, which has direct benefit for hearing conservation efforts.

  16. Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics

    NASA Technical Reports Server (NTRS)

    Bushnell, Dennis M.

    2000-01-01

    This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.

  17. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.

  18. Advances in Photocatalytic CO₂ Reduction with Water: A Review.

    PubMed

    Nahar, Samsun; Zain, M F M; Kadhum, Abdul Amir H; Hasan, Hassimi Abu; Hasan, Md Riad

    2017-06-08

    In recent years, the increasing level of CO₂ in the atmosphere has not only contributed to global warming but has also triggered considerable interest in photocatalytic reduction of CO₂. The reduction of CO₂ with H₂O using sunlight is an innovative way to solve the current growing environmental challenges. This paper reviews the basic principles of photocatalysis and photocatalytic CO₂ reduction, discusses the measures of the photocatalytic efficiency and summarizes current advances in the exploration of this technology using different types of semiconductor photocatalysts, such as TiO₂ and modified TiO₂, layered-perovskite Ag/ALa₄Ti₄O₁₅ (A = Ca, Ba, Sr), ferroelectric LiNbO₃, and plasmonic photocatalysts. Visible light harvesting, novel plasmonic photocatalysts offer potential solutions for some of the main drawbacks in this reduction process. Effective plasmonic photocatalysts that have shown reduction activities towards CO₂ with H₂O are highlighted here. Although this technology is still at an embryonic stage, further studies with standard theoretical and comprehensive format are suggested to develop photocatalysts with high production rates and selectivity. Based on the collected results, the immense prospects and opportunities that exist in this technique are also reviewed here.

  19. Advanced CO2 Removal and Reduction System

    NASA Technical Reports Server (NTRS)

    Alptekin, Gokhan; Dubovik, Margarita; Copeland, Robert J.

    2011-01-01

    An advanced system for removing CO2 and H2O from cabin air, reducing the CO2, and returning the resulting O2 to the air is less massive than a prior system that includes two assemblies: one for removal and one for reduction. Also, in this system, unlike in the prior system, there is no need to compress and temporarily store CO2. In this present system, removal and reduction take place within a single assembly, wherein removal is effected by use of an alkali sorbent and reduction is effected using a supply of H2 and a Ru catalyst, by means of the Sabatier reaction, CO2 + 4H2 → CH4 + 2H2O. The assembly contains two fixed-bed reactors operating in alternation: At first, air is blown through the first bed, which absorbs CO2 and H2O. Once the first bed is saturated with CO2 and H2O, the flow of air is diverted through the second bed and the first bed is regenerated by supplying it with H2 for the Sabatier reaction. Initially, the H2 is heated to provide heat for the regeneration reaction, which is endothermic. In the later stages of regeneration, the Sabatier reaction, which is exothermic, supplies the heat for regeneration.

  20. Active Vibration Reduction of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Metscher, Jonathan F.; Schifer, Nicholas A.

    2016-01-01

    Stirling Radioisotope Power Systems (RPS) are being developed as an option to provide power on future space science missions where robotic spacecraft will orbit, flyby, land or rove. A Stirling Radioisotope Generator (SRG) could offer space missions a more efficient power system that uses one fourth of the nuclear fuel and decreases the thermal footprint compared to the current state of the art. The Stirling Cycle Technology Development (SCTD) Project is funded by the RPS Program to develop Stirling-based subsystems, including convertor and controller maturation efforts that have resulted in high-fidelity hardware like the Advanced Stirling Radioisotope Generator (ASRG), Advanced Stirling Convertor (ASC), and ASC Controller Unit (ACU). The SCTD Project also performs research to develop less mature technologies with a wide variety of objectives, including increasing temperature capability to enable new environments, improving system reliability or fault tolerance, reducing mass or size, and developing advanced concepts that are mission enabling. Active vibration reduction systems (AVRS), or "balancers", have historically been developed and characterized to provide fault tolerance for generator designs that incorporate dual-opposed Stirling convertors or enable single convertor, or small RPS, missions. Balancers reduce the dynamic disturbance forces created by the power piston and displacer internal moving components of a single operating convertor to meet spacecraft requirements for induced disturbance force. To improve fault tolerance for dual-opposed configurations and enable single convertor configurations, a breadboard AVRS was implemented on the Advanced Stirling Convertor (ASC). The AVRS included a linear motor, a motor mount, and a closed-loop controller able to balance out the transmitted peak dynamic disturbance using acceleration feedback. Test objectives included quantifying power and mass penalty and reduction in transmitted force over a range of ASC

  2. Variance reduction through robust design of boundary conditions for stochastic hyperbolic systems of equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se

    We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can with the same knowledge of data get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.

  3. NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements. Demonstrations and

  4. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
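
    The gain from variance reduction is easiest to see on a toy problem. The Python sketch below is a generic illustration of one such technique (importance sampling for a rare event), not part of the code described above: the re-weighted estimator reaches the same answer as plain Monte Carlo with a far smaller standard error, which is the effect exploited for rare channels such as fluorescence from trace elements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0  # rare event: P(X > 4) for X ~ N(0, 1), about 3.2e-5

# Plain Monte Carlo: most samples never reach the region of interest.
x = rng.standard_normal(n)
plain = (x > threshold).astype(float)

# Importance sampling: draw from N(threshold, 1) so the rare region is
# sampled often, then re-weight by the likelihood ratio f(x)/g(x).
y = rng.normal(loc=threshold, scale=1.0, size=n)
weights = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - threshold) ** 2)
weighted = (y > threshold) * weights

for name, est in [("plain MC", plain), ("importance sampling", weighted)]:
    print(f"{name}: mean = {est.mean():.3e}, std error = {est.std(ddof=1) / np.sqrt(n):.1e}")
```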

  5. Quantifying cosmic variance

    NASA Astrophysics Data System (ADS)

    Driver, Simon P.; Robotham, Aaron S. G.

    2010-10-01

    We determine an expression for the cosmic variance of any `normal' galaxy survey based on examination of M* ± 1 mag galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) data cube. We find that cosmic variance will depend on a number of factors, principally: total survey volume, survey aspect ratio and whether the area surveyed is contiguous or comprises independent sightlines. As a rule of thumb, cosmic variance falls below 10 per cent once a volume of 10^7 h_{0.7}^{-3} Mpc^3 is surveyed for a single contiguous region with a 1:1 aspect ratio. Cosmic variance will be lower for higher aspect ratios and/or non-contiguous surveys. Extrapolating outside our test region we infer that cosmic variance in the entire SDSS DR7 main survey region is ~7 per cent to z < 0.1. The equation obtained from the SDSS DR7 region can be generalized to estimate the cosmic variance for any density measurement determined from normal galaxies (e.g. luminosity densities, stellar mass densities and cosmic star formation rates) within the volume range 10^3-10^7 h_{0.7}^{-3} Mpc^3. We apply our equation to show that two sightlines are required to ensure that cosmic variance is <10 per cent in any ASKAP galaxy survey (divided into Δz ~ 0.1 intervals, i.e. ~1 Gyr intervals for z < 0.5). Likewise 10 MeerKAT sightlines will be required to meet the same conditions. GAMA, VVDS and zCOSMOS all suffer less than 10 per cent cosmic variance (~3-8 per cent) in Δz intervals of 0.1, 0.25 and 0.5, respectively. Finally we show that cosmic variance is potentially at the 50-70 per cent level, or greater, in the Hubble Space Telescope (HST) Ultra Deep Field depending on assumptions as to the evolution of clustering. 100 or 10 independent sightlines will be required to reduce cosmic variance to a manageable level (<10 per cent) for HST ACS or HST WFC3 surveys, respectively (in Δz ~ 1 intervals). Cosmic variance is therefore a significant factor in the z > 6 HST studies currently underway.
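
    The idea of cosmic variance as field-to-field scatter in excess of shot noise can be illustrated with a short Python sketch; the mock counts and the 15% clustering scatter below are illustrative assumptions, not the SDSS DR7 calibration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy estimate of relative cosmic variance: scatter of galaxy counts across
# independent sub-volumes, in excess of Poisson noise. Mock numbers only.
n_fields = 50
mean_count = 400.0
clustering_scatter = 0.15        # assumed 15% field-to-field density fluctuation

density = rng.lognormal(mean=0.0, sigma=clustering_scatter, size=n_fields)
counts = rng.poisson(mean_count * density / density.mean())

total_var = counts.var(ddof=1)
poisson_var = counts.mean()                     # shot-noise contribution
cosmic_var = max(total_var - poisson_var, 0.0)  # excess variance from clustering
rel_cosmic_variance = np.sqrt(cosmic_var) / counts.mean()
print(f"relative cosmic variance ~ {rel_cosmic_variance:.2%}")
```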

  6. Advances in Photocatalytic CO2 Reduction with Water: A Review

    PubMed Central

    Nahar, Samsun; Zain, M. F. M.; Kadhum, Abdul Amir H.; Hasan, Hassimi Abu; Hasan, Md. Riad

    2017-01-01

    In recent years, the increasing level of CO2 in the atmosphere has not only contributed to global warming but has also triggered considerable interest in photocatalytic reduction of CO2. The reduction of CO2 with H2O using sunlight is an innovative way to solve the current growing environmental challenges. This paper reviews the basic principles of photocatalysis and photocatalytic CO2 reduction, discusses the measures of the photocatalytic efficiency and summarizes current advances in the exploration of this technology using different types of semiconductor photocatalysts, such as TiO2 and modified TiO2, layered-perovskite Ag/ALa4Ti4O15 (A = Ca, Ba, Sr), ferroelectric LiNbO3, and plasmonic photocatalysts. Visible light harvesting, novel plasmonic photocatalysts offer potential solutions for some of the main drawbacks in this reduction process. Effective plasmonic photocatalysts that have shown reduction activities towards CO2 with H2O are highlighted here. Although this technology is still at an embryonic stage, further studies with standard theoretical and comprehensive format are suggested to develop photocatalysts with high production rates and selectivity. Based on the collected results, the immense prospects and opportunities that exist in this technique are also reviewed here. PMID:28772988

  7. User Interface for the ESO Advanced Data Products Image Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Rité, C.; Delmotte, N.; Retzlaff, J.; Rosati, P.; Slijkhuis, R.; Vandame, B.

    2006-07-01

    The poster presents a friendly user interface for image reduction, totally written in Python and developed by the Advanced Data Products (ADP) group. The interface is a front-end to the ESO/MVM image reduction package, originally developed in the ESO Imaging Survey (EIS) project and used currently to reduce imaging data from several instruments such as WFI, ISAAC, SOFI and FORS1. As part of its scope, the interface produces high-level, VO-compliant science images from raw data, providing the astronomer with a complete monitoring system during the reduction and also computing statistical image properties for data quality assessment. The interface is meant to be used for VO services; it is free but unmaintained software, and the intention of the authors is to share code and experience. The poster describes the interface architecture and current capabilities and gives a description of the ESO/MVM engine for image reduction. The ESO/MVM engine should be released by the end of this year.

  8. 20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO

  9. NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts

    NASA Technical Reports Server (NTRS)

    Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements

  10. Effects of process variables and kinetics on the degradation of 2,4-dichlorophenol using advanced reduction processes (ARP).

    PubMed

    Yu, Xingyue; Cabooter, Deirdre; Dewil, Raf

    2018-05-24

    This study aims at investigating the efficiency and kinetics of 2,4-DCP degradation via advanced reduction processes (ARP). Using UV light as activation method, the highest degradation efficiency of 2,4-DCP was obtained when using sulphite as a reducing agent. The highest degradation efficiency was observed under alkaline conditions (pH = 10.0), for high sulphite dosage and UV intensity, and low 2,4-DCP concentration. For all process conditions, first-order reaction rate kinetics were applicable. A quadratic polynomial equation fitted by a Box-Behnken Design was used as a statistical model and proved to be precise and reliable in describing the significance of the different process variables. The analysis of variance demonstrated that the experimental results were in good agreement with the predicted model (R^2 = 0.9343), and solution pH, sulphite dose and UV intensity were found to be key process variables in the sulphite/UV ARP. Consequently, the present study provides a promising approach for the efficient degradation of 2,4-DCP with fast degradation kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
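
    For readers unfamiliar with the kinetics referred to above, a first-order rate constant is typically extracted from a linear fit of ln(C/C0) against time. The Python sketch below uses invented concentrations purely to illustrate the procedure; it is not data from the study.

```python
import numpy as np

# Illustrative first-order kinetics fit, ln(C/C0) = -k*t, for a degradation run.
# The concentrations below are made-up placeholders.
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])   # time, min
c = np.array([50.0, 41.0, 33.5, 22.4, 15.1, 8.3, 4.6])   # concentration, mg/L

slope, intercept = np.polyfit(t, np.log(c / c[0]), 1)
k = -slope                      # first-order rate constant (1/min)
half_life = np.log(2) / k
print(f"k = {k:.3f} 1/min, half-life = {half_life:.1f} min")
```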

  11. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  12. VARIANCE ANISOTROPY IN KINETIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
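
    Operationally, the variance anisotropy quoted above is the ratio of the fluctuation variance perpendicular to the mean magnetic field to the variance parallel to it. A minimal Python sketch with synthetic field vectors (not the simulation data of the study) shows the decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic magnetic field samples: anisotropic fluctuations plus a mean field
# along z. These stand in for spacecraft or simulation data.
b = rng.normal(0.0, [2.12, 2.12, 1.0], size=(10_000, 3)) + np.array([0.0, 0.0, 5.0])

b0 = b.mean(axis=0)
e_par = b0 / np.linalg.norm(b0)            # unit vector along the mean field
db = b - b0                                # fluctuations about the mean

var_par = (db @ e_par).var()               # variance of the parallel component
var_perp = db.var(axis=0).sum() - var_par  # total variance minus parallel part
print(f"perpendicular/parallel variance ratio ~ {var_perp / var_par:.1f}")
```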

  13. Coefficient of Variance as Quality Criterion for Evaluation of Advanced Hepatic Fibrosis Using 2D Shear-Wave Elastography.

    PubMed

    Lim, Sanghyeok; Kim, Seung Hyun; Kim, Yongsoo; Cho, Young Seo; Kim, Tae Yeob; Jeong, Woo Kyoung; Sohn, Joo Hyun

    2018-02-01

    To compare the diagnostic performance for advanced hepatic fibrosis measured by 2D shear-wave elastography (SWE), using either the coefficient of variance (CV) or the interquartile range divided by the median value (IQR/M) as quality criteria. In this retrospective study, from January 2011 to December 2013, 96 patients, who underwent both liver stiffness measurement by 2D SWE and liver biopsy for hepatic fibrosis grading, were enrolled. The diagnostic performances of the CV and the IQR/M were analyzed using receiver operating characteristic curves with areas under the curves (AUCs) and were compared by Fisher's Z test, based on matching the cutoff points in an interactive dot diagram. All P values less than 0.05 were considered significant. When using the cutoff value IQR/M of 0.21, the matched cutoff point of CV was 20%. When a cutoff value of CV of 20% was used, the diagnostic performance for advanced hepatic fibrosis (≥ F3 grade) in the group with CV of less than 20% was better than that in the group with CV greater than or equal to 20% (AUC 0.967 versus 0.786, z statistic = 2.23, P = .025), whereas the matched IQR/M cutoff of 0.21 showed no such difference (AUC 0.918 versus 0.927, z statistic = -0.178, P = .859). The validity of liver stiffness measurements made by 2D SWE for assessing advanced hepatic fibrosis may be judged using CVs, and when the CV is less than 20% it can be considered "more reliable" than using IQR/M of less than 0.21. © 2017 by the American Institute of Ultrasound in Medicine.
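
    Both quality criteria compared in the study are simple summary statistics of the repeated stiffness measurements. The Python sketch below, with invented kPa values, shows how CV and IQR/M would be computed and checked against the matched cutoffs reported above.

```python
import numpy as np

# Repeated 2D SWE stiffness measurements for one patient (invented values, kPa).
stiffness = np.array([7.9, 8.4, 8.1, 9.0, 7.6, 8.8, 8.2, 8.5, 7.8, 8.3])

cv = stiffness.std(ddof=1) / stiffness.mean()       # coefficient of variation
q1, q3 = np.percentile(stiffness, [25, 75])
iqr_over_median = (q3 - q1) / np.median(stiffness)   # IQR/M

reliable = cv < 0.20   # matched cutoffs from the study: CV < 20%, IQR/M < 0.21
print(f"CV = {cv:.1%}, IQR/M = {iqr_over_median:.2f}, 'more reliable' = {reliable}")
```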

  14. Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.

    PubMed

    Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang

    2016-05-01

    In view of the climate changes caused by the continuously rising levels of atmospheric CO2 , advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Variance-Stable R-Estimators.

    DTIC Science & Technology

    1984-05-01

    By means of the concept of the change-of-variance function we investigate the stability properties of the asymptotic variance of R-estimators. This allows us to construct the optimal V-robust R-estimator that minimizes the asymptotic variance at the model, under the side condition of a bounded change-of-variance function. Finally, we discuss the connection between this function and an influence function for two-sample rank tests introduced by Eplett (1980). (Author)

  16. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    PubMed Central

    Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection and advanced reburning in a pilot scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%~25% reburn heat input, a temperature range from 1100 °C to 1400 °C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the NH3 ability at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning only uses a common dose of ammonia as in conventional SNCR technology. Mechanism study shows that the oxidization of CO can improve the decomposition of H2O, which will enrich the radical pools, igniting the whole reactions at lower temperatures. PMID:15682503

  17. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andy; Greene, William D.

    2017-01-01

    The goals of NASA's Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort are to (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. The SLS Block 1 vehicle is being designed to carry 70 mT to LEO and uses two five-segment solid rocket boosters (SRBs) similar to the boosters that helped power the space shuttle to orbit. The evolved 130 mT payload class rocket requires an advanced booster with more thrust than any existing U.S. liquid- or solid-fueled boosters

  18. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is

  19. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
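
    For reference, the two-sample (Allan) variance discussed here is computed from block averages of fractional-frequency data. A minimal Python sketch with synthetic white frequency noise (not the paper's spectral analysis) is shown below.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y for an
    averaging factor m (tau = m * tau0). Standard two-sample definition:
    AVAR = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    n_blocks = len(y) // m
    ybar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# Example: white frequency noise, for which AVAR falls roughly as 1/tau.
rng = np.random.default_rng(3)
y = rng.standard_normal(100_000)
for m in (1, 10, 100, 1000):
    print(f"m = {m:5d}  AVAR = {allan_variance(y, m):.3e}")
```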

  20. Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.

    ERIC Educational Resources Information Center

    Kroff, Michael W.

    This paper reviews issues involved in converting continuous variables to nominal variables to be used in the OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are reviewed in addition to concerns regarding the loss of variance and a reduction in…
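
    The variance and power loss referred to above can be demonstrated in a few lines. The Python sketch below, using synthetic data and intended only as an illustration, shows how a median split attenuates an underlying correlation.

```python
import numpy as np

rng = np.random.default_rng(4)

# A continuous predictor x with a true linear relationship to an outcome y.
n = 5_000
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)

# Dichotomize x at the median, discarding its within-group variance.
x_split = (x > np.median(x)).astype(float)

r_continuous = np.corrcoef(x, y)[0, 1]
r_dichotomized = np.corrcoef(x_split, y)[0, 1]
print(f"r (continuous) = {r_continuous:.2f}, r (median split) = {r_dichotomized:.2f}")
```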

  1. Potential reduction of en route noise from an advanced turboprop aircraft

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.

    1990-01-01

    When the en route noise of a representative aircraft powered by an eight-blade SR-7 propeller was previously calculated, the noise level was cited as a possible concern associated with the acceptance of advanced turboprop aircraft. Some potential methods for reducing the en route noise were then investigated and are reported. Source noise reductions from increasing the blade number and from operating at higher rotative speed to reach a local minimum noise point were investigated. Greater atmospheric attenuations for higher blade passing frequencies were also indicated. Potential en route noise reductions from these methods were calculated as 9.5 dB (6.5 dB(A)) for a 10-blade redesigned propeller and 15.5 dB (11 dB(A)) for a 12-blade redesigned propeller.

  2. Experiment and mechanism investigation on advanced reburning for NO(x) reduction: influence of CO and temperature.

    PubMed

    Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa

    2005-03-01

    Pulverized coal reburning, ammonia injection and advanced reburning in a pilot scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15-25% reburn heat input, a temperature range from 1100 degrees C to 1400 degrees C, and also the carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction with Yanzhou coal was obtained by pure coal reburning. The optimal temperature for reburning is about 1300 degrees C and a fuel-rich stoichiometric ratio is essential; coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 degrees C to 1100 degrees C. CO can improve the NH3 ability at lower temperature. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-catalytic NO(x) Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning only uses a common dose of ammonia as in conventional SNCR technology. Mechanism study shows that the oxidization of CO can improve the decomposition of H2O, which will enrich the radical pools, igniting the whole reactions at lower temperatures.

  3. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
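
    One generic way to make such a modality classification concrete is to compare information criteria of one- and two-component mixture fits at each grid location. The Python sketch below uses scikit-learn's GaussianMixture on synthetic ensembles and is a stand-in for the classification idea, not a reproduction of the authors' scheme or confidence metrics.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

def classify_modality(values):
    """Label the ensemble values at one location as 'unimodal' or 'bimodal'
    by comparing the BIC of 1- and 2-component Gaussian mixtures."""
    x = np.asarray(values).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in (1, 2)]
    return "unimodal" if bics[0] <= bics[1] else "bimodal"

unimodal_ensemble = rng.normal(10.0, 1.0, size=200)
bimodal_ensemble = np.concatenate([rng.normal(5.0, 0.5, 100),
                                   rng.normal(12.0, 0.5, 100)])
print(classify_modality(unimodal_ensemble), classify_modality(bimodal_ensemble))
```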

  4. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt will be made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in the usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
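
    The comparison described above can be mirrored with standard library routines. The Python sketch below runs a normal-theory one-way ANOVA, its non-parametric Kruskal-Wallis analogue, and Levene's test of the homogeneous-variance assumption on synthetic groups; the group parameters are arbitrary illustrations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Three groups, one of which violates the equal-variance assumption.
groups = [rng.normal(loc, scale, size=30)
          for loc, scale in [(10.0, 1.0), (10.5, 1.0), (11.0, 2.5)]]

f_stat, p_anova = stats.f_oneway(*groups)      # normal-theory one-way ANOVA
h_stat, p_kruskal = stats.kruskal(*groups)     # non-parametric analogue
w_stat, p_levene = stats.levene(*groups)       # homogeneity-of-variance check
print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kruskal:.3f}, "
      f"Levene (equal variances) p = {p_levene:.3f}")
```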

  5. Variance Assistance Document: Land Disposal Restrictions Treatability Variances and Determinations of Equivalent Treatment

    EPA Pesticide Factsheets

    This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.

  6. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  7. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  8. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  9. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  10. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle.

    PubMed

    Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E

    2013-04-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Advanced Booster Composite Case/Polybenzimidazole Nitrile Butadiene Rubber Insulation Development

    NASA Technical Reports Server (NTRS)

    Gentz, Steve; Taylor, Robert; Nettles, Mindy

    2015-01-01

    The NASA Engineering and Safety Center (NESC) was requested to examine processing sensitivities (e.g., cure temperature control/variance, debonds, density variations) of polybenzimidazole nitrile butadiene rubber (PBI-NBR) insulation, case fiber, and resin systems and to evaluate nondestructive evaluation (NDE) and damage tolerance methods/models required to support human-rated composite motor cases. The proposed use of composite motor cases in Blocks IA and II was expected to increase performance capability through optimizing operating pressure and increasing propellant mass fraction. This assessment was to support the evaluation of risk reduction for large booster component development/fabrication, NDE of low mass-to-strength ratio material structures, and solid booster propellant formulation as requested in the Space Launch System NASA Research Announcement for Advanced Booster Engineering Demonstration and/or Risk Reduction. Composite case materials and high-energy propellants represent an enabling capability in the Agency's ability to provide affordable, high-performing advanced booster concepts. The NESC team was requested to provide an assessment of co- and multiple-cure processing of composite case and PBI-NBR insulation materials and evaluation of high-energy propellant formulations.

  12. The influence of local spring temperature variance on temperature sensitivity of spring phenology.

    PubMed

    Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe

    2014-05-01

    The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assemble a ground-based long-term (20-50 year) spring phenology database (the PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level from ground observations. This suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that the frost risk is higher where local spring temperature variance is larger, and that plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study shows that local spring temperature variance is an understudied factor in phenological sensitivity and highlights the necessity of incorporating it to improve the predictability of plant responses to anthropogenic climate change in future studies. © 2013 John Wiley & Sons Ltd.

  13. Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.

  14. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  15. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.

  16. Clinical Use of High-Intensity Focused Ultrasound (HIFU) for Tumor and Pain Reduction in Advanced Pancreatic Cancer.

    PubMed

    Strunk, H M; Henseler, J; Rauch, M; Mücke, M; Kukuk, G; Cuhls, H; Radbruch, L; Zhang, L; Schild, H H; Marinova, M

    2016-07-01

    Evaluation of ultrasound-guided high-intensity focused ultrasound (HIFU) used for the first time in Germany in patients with inoperable pancreatic cancer for reduction of tumor volume and relief of tumor-associated pain. 15 patients with locally advanced inoperable pancreatic cancer and tumor-related pain symptoms were treated by HIFU (n = 6 UICC stage III, n = 9 UICC stage IV). 13 patients underwent simultaneous standard chemotherapy. Ablation was performed using the JC HIFU system (Chongqing, China HAIFU Company) with an ultrasonic device for real-time imaging. Imaging follow-up (US, CT, MRI) and clinical assessment using validated questionnaires (NRS, BPI) was performed before and up to 15 months after HIFU. Despite biliary or duodenal stents (4/15) and encasement of visceral vessels (15/15), HIFU treatment was performed successfully in all patients. Treatment time and sonication time were 111 min and 1103 s, respectively. The applied total energy was 386 768 J. After HIFU ablation, contrast-enhanced imaging showed devascularization of treated tumor regions with a significant average volume reduction of 63.8 % after 3 months. Considerable pain relief was achieved in 12 patients after HIFU (complete or partial pain reduction in 6 patients). US-guided HIFU with a suitable acoustic pathway can be used for local tumor control and relief of tumor-associated pain in patients with locally advanced pancreatic cancer. • US-guided HIFU allows an additive treatment of unresectable pancreatic cancer.• HIFU can be used for tumor volume reduction.• Using HIFU, a significant reduction of cancer-related pain was achieved.• HIFU provides clinical benefit in patients with pancreatic cancer. Citation Format: • Strunk HM, Henseler J, Rauch M et al. Clinical Use of High-Intensity Focused Ultrasound (HIFU) for Tumor and Pain Reduction in Advanced Pancreatic Cancer. Fortschr Röntgenstr 2016; 188: 662 - 670. © Georg Thieme Verlag KG

  17. Bias and variance reduction in estimating the proportion of true-null hypotheses

    PubMed Central

    Cheng, Yebin; Gao, Dexiang; Tong, Tiejun

    2015-01-01

    When testing a large number of hypotheses, estimating the proportion of true nulls, denoted by π0, becomes increasingly important. This quantity has many applications in practice. For instance, a reliable estimate of π0 can eliminate the conservative bias of the Benjamini–Hochberg procedure on controlling the false discovery rate. It is known that most methods in the literature for estimating π0 are conservative. Recently, some attempts have been made to reduce this estimation bias; however, they either over-correct the bias or suffer from an unacceptably large estimation variance. In this paper, we propose a new method for estimating π0 that aims to reduce the bias and variance of the estimation simultaneously. To achieve this, we first utilize the probability density functions of false-null
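
    To make the quantity π0 concrete, the Python sketch below implements the classic Storey-type λ estimator on simulated p-values. It is shown only as background; it is not the reduced-bias, reduced-variance estimator proposed in the paper.

```python
import numpy as np

def storey_pi0(p_values, lam=0.5):
    """Classic Storey-type estimator of the proportion of true nulls:
    pi0_hat = #{p > lambda} / ((1 - lambda) * m)."""
    p = np.asarray(p_values)
    return np.mean(p > lam) / (1.0 - lam)

# Mixture of 80% uniform p-values (true nulls) and 20% small p-values (false nulls).
rng = np.random.default_rng(7)
p = np.concatenate([rng.uniform(size=8_000), rng.beta(0.5, 20.0, size=2_000)])
print(f"estimated pi0 = {storey_pi0(p):.3f} (true value 0.8)")
```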

  18. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise tests statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that : (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
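
    The general shrinkage idea behind such regularization can be illustrated with a toy example: pulling noisy per-variable variance estimates toward a pooled value trades a small bias for a large reduction in estimation variance. The Python sketch below uses a fixed, assumed shrinkage weight and is only an illustration of this idea, not the MVR procedure itself.

```python
import numpy as np

rng = np.random.default_rng(8)

# Many variables, few replicates: per-variable variance estimates are noisy.
n_vars, n_reps = 1_000, 4
true_sd = rng.uniform(0.5, 2.0, size=n_vars)
data = rng.normal(0.0, true_sd[:, None], size=(n_vars, n_reps))

s2 = data.var(axis=1, ddof=1)                 # noisy per-variable variances
s2_pooled = s2.mean()
w = 0.5                                       # assumed shrinkage weight
s2_shrunk = (1.0 - w) * s2 + w * s2_pooled    # regularized variances

mse_raw = np.mean((s2 - true_sd**2) ** 2)
mse_shrunk = np.mean((s2_shrunk - true_sd**2) ** 2)
print(f"MSE raw = {mse_raw:.3f}, MSE shrunk = {mse_shrunk:.3f}")
```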

  19. Variance of transionospheric VLF wave power absorption

    NASA Astrophysics Data System (ADS)

    Tao, X.; Bortnik, J.; Friedrich, M.

    2010-07-01

    To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method using the standard ionospheric model IRI and in situ observational data. We first verify the classic absorption curves of Helliwell's using our full wave code. Then we show that the IRI model gives overall smaller wave absorption compared with Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that are more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet time wave absorption is smaller than that of Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.

  20. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
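
    For context, the classical building block of such comparisons is the minimum-variance portfolio, whose weights follow directly from the sample covariance matrix; the median-variance variant replaces the mean return with the median return when characterizing the portfolio. The Python sketch below uses synthetic returns, not Bursa Malaysia data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic daily returns for 5 assets over 250 days (placeholder data).
returns = rng.normal(0.0005, 0.01, size=(250, 5))

# Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1).
cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()

portfolio = returns @ w
print("weights:", np.round(w, 3))
print(f"mean return = {portfolio.mean():.5f}, median return = {np.median(portfolio):.5f}, "
      f"risk (std) = {portfolio.std(ddof=1):.5f}")
```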

  1. Recent advances in the design of tailored nanomaterials for efficient oxygen reduction reaction

    DOE PAGES

    Lv, Haifeng; Li, Dongguo; Strmcnik, Dusan; ...

    2016-04-11

    In the past decade, polymer electrolyte membrane fuel cells (PEMFCs) have been evaluated for both automotive and stationary applications. One of the main obstacles for large scale commercialization of this technology is related to the sluggish oxygen reduction reaction that takes place on the cathode side of the fuel cell. Consequently, ongoing research efforts are focused on the design of cathode materials that could improve the kinetics and durability. The majority of these efforts rely on novel synthetic approaches that provide control over the structure, size, shape and composition of catalytically active materials. This article highlights the most recent advances that have been made to tailor critical parameters of the nanoscale materials in order to achieve more efficient performance of the oxygen reduction reaction (ORR).

  2. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Greene, William D.

    2017-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. During the ABEDRR effort, the Dynetics Team has modified flight-proven Apollo-Saturn F-1 engine components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically-produced engine that could potentially both replace the RD-180 on Atlas V and satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. Engine physical dimensions and performance parameters resulting from this study provide the system level requirements for the ORSC risk reduction test article

  3. Chemical oxygen demand reduction in coffee wastewater through chemical flocculation and advanced oxidation processes.

    PubMed

    Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando

    2007-01-01

    The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87%, when applied to the flocculated coffee wastewater.

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
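
    The 'MVR' package is the authors' implementation; as a rough illustration of the underlying idea only, the sketch below shrinks variable-specific variance estimates toward a pooled value before forming t-like statistics. The data, the global pooling target, and the shrinkage weight lam are hypothetical stand-ins for the paper's similarity-statistic clustering and joint mean-variance regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional data: p variables, n samples per group (p >> n).
p, n = 2000, 4
group_a = rng.normal(0.0, 1.0, size=(p, n))
group_b = rng.normal(0.1, 1.0, size=(p, n))

def shrunk_variance(x, lam=0.5):
    """Shrink per-variable sample variances toward their pooled mean.

    lam is an illustrative shrinkage weight; the MVR paper instead derives
    local pooling from a similarity-statistic clustering of the variables.
    """
    s2 = x.var(axis=1, ddof=1)   # variable-specific variances
    pooled = s2.mean()           # crude global pooling target
    return lam * pooled + (1.0 - lam) * s2

# Regularized t-like statistic per variable.
va = shrunk_variance(group_a)
vb = shrunk_variance(group_b)
t_like = (group_a.mean(axis=1) - group_b.mean(axis=1)) / np.sqrt(va / n + vb / n)
print("mean |t-like| statistic:", np.abs(t_like).mean())
```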

  5. Reducing uncertainty in estimating virus reduction by advanced water treatment processes.

    PubMed

    Gerba, Charles P; Betancourt, Walter Q; Kitajima, Masaaki; Rock, Channah M

    2018-04-15

    Treatment of wastewater for potable reuse requires the reduction of enteric viruses to levels that pose no significant risk to human health. Advanced water treatment trains (e.g., chemical clarification, reverse osmosis, ultrafiltration, advanced oxidation) have been developed to provide reductions of viruses to differing levels of regulatory control depending upon the levels of human exposure and associated health risks. Important in any assessment is information on the concentration and types of viruses in the untreated wastewater, as well as the degree of removal by each treatment process. However, it is critical that the uncertainty associated with virus concentration and removal or inactivation by wastewater treatment be understood in order to improve these estimates and identify research needs. We critically reviewed the literature to identify the sources of uncertainty in these estimates. Biological diversity within families and genera of viruses (e.g. enteroviruses, rotaviruses, adenoviruses, reoviruses, noroviruses) and specific virus types (e.g. serotypes or genotypes) creates the greatest uncertainty. These aspects affect the methods for detection and quantification of viruses and the anticipated removal efficiency by treatment processes. Approaches to reduce uncertainty may include: (1) inclusion of a virus indicator for assessing efficiency of virus concentration and detection by molecular methods for each sample, (2) use of viruses most resistant to individual treatment processes (e.g. adenoviruses for UV light disinfection and reoviruses for chlorination), (3) data on the ratio of virion or genome copies to infectivity in untreated wastewater, and (4) assessment of virus removal at field-scale treatment systems to verify laboratory and pilot plant data for virus removal. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. 10 CFR 851.30 - Consideration of variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.30 Consideration of variances. (a) Variances shall be granted by the Under Secretary after considering the recommendation of the Chief Health, Safety and Security Officer. The authority to grant a variance cannot be delegated. (b) The application...

  7. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written...) The CSO may forward the application to the Chief Health, Safety and Security Officer. (2) If the CSO...

  8. Reduction of non-Betalactam Antibiotics COD by Combined Coagulation and Advanced Oxidation Processes.

    PubMed

    Yazdanbakhsh, Ahmad Reza; Mohammadi, Amir Sheikh; Alinejad, Abdol Azim; Hassani, Ghasem; Golmohammadi, Sohrab; Mohseni, Seyed Mohsen; Sardar, Mahdieh; Sarsangi, Vali

    2016-11-01

    The present study evaluates the reduction of antibiotic COD from wastewater by combined coagulation and advanced oxidation processes (AOPs). The reduction of Azithromycin COD by combined coagulation and Fenton-like processes reached a maximum of 96.9% at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.36 mM/L and 0.38 mM/L, respectively. Also, a 97.9% reduction of Clarithromycin COD was achieved at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.3 mM/L and 0.3 mM/L, respectively. The results of the kinetic studies were best fitted by the pseudo-first-order equation. The results showed a higher rate constant value for the combined coagulation and Fenton-like processes (kap = 0.022 min^-1 with a half-life of 31.5 min for Azithromycin, and kap = 0.023 min^-1 with a half-life of 30.1 min for Clarithromycin).
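
    The reported rate constants and half-lives are consistent with pseudo-first-order kinetics, for which the half-life is t1/2 = ln(2)/k. A minimal check using the values quoted above:

```python
import math

# Pseudo-first-order kinetics: C(t) = C0 * exp(-k t), so t_half = ln(2) / k.
for name, k in [("Azithromycin", 0.022), ("Clarithromycin", 0.023)]:  # k in 1/min
    t_half = math.log(2) / k
    print(f"{name}: k = {k} min^-1 -> half-life = {t_half:.1f} min")
# Prints about 31.5 min and 30.1 min, matching the values reported above.
```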

  9. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
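
    A minimal sketch of the closed-form Markowitz mean-variance solution (minimize portfolio variance subject to a target return and full investment, with short selling allowed). The returns below are synthetic stand-ins for the FBMKLCI weekly data used in the study, and the target return is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic weekly returns for 20 hypothetical stocks (stand-ins for the
# FBMKLCI component data used in the study, which is not reproduced here).
n_weeks, n_stocks = 260, 20
returns = rng.normal(0.002, 0.03, size=(n_weeks, n_stocks))

mu = returns.mean(axis=0)               # expected weekly returns
sigma = np.cov(returns, rowvar=False)   # covariance (risk) matrix
ones = np.ones(n_stocks)
target = 0.002                          # illustrative target weekly return

# Closed-form solution: minimize w' Sigma w subject to w'mu = target, w'1 = 1.
inv = np.linalg.inv(sigma)
a, b, c = mu @ inv @ mu, mu @ inv @ ones, ones @ inv @ ones
d = a * c - b**2
lam = (c * target - b) / d
gam = (a - b * target) / d
w = lam * (inv @ mu) + gam * (inv @ ones)

print("weights sum to:", w.sum())
print("portfolio return:", w @ mu)
print("portfolio variance:", w @ sigma @ w)
```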

  10. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... confidential information in reaching a decision on a variance application. Interested members of the public...

  11. [Perimetric changes in advanced glaucoma].

    PubMed

    Feraru, Crenguta Ioana; Pantalon, Anca

    2011-01-01

    The evaluation of various perimetric aspects in advanced glaucoma stages correlated to morpho-functional changes. MATERIAL AND METHOD: Retrospective clinical trial over a 10-month period that included patients with advanced glaucoma stages, for whom several computerised visual field tests had been recorded (central 24-2 strategy, 10-2 strategy with either the III or V Goldmann stimulus spot size) along with other morpho-functional ocular parameters: VA, IOP, optic disk analysis. We included in our study 56 eyes from 45 patients. In most cases (89%) it was an open-angle glaucoma (either primary or secondary). Mean visual acuity was 0.45 +/- 0.28. Regarding the perimetric deficit, 83% had an advanced deficit, 9% moderate and 8% early visual changes. As the perimetric type of defect, we found a majority with a general reduction of sensitivity (33 eyes) plus ring-shaped scotoma. In 6 eyes (10.7%) with only a central island of vision remaining, we performed the central 10-2 strategy with the III or V Goldmann stimulus spot size. Statistical analysis showed a weak correlation between visual acuity and the quantitative perimetric parameters (MD and PSD), and variance analysis found a multiple correlation parameter of p = 0.07, indicating no linear correspondence between the morpho-functional parameters: VA-MD(PSD) and C/D ratio. In advanced glaucoma stages, the perimetric changes are mostly severe. Perimetric evaluation is essential in these stages and needs to be individualised.

  12. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
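
    A minimal numerical sketch of the consistent CADIS algebra that FW-CADIS builds on: given a source q and an adjoint function on a mesh, the biased source and weight-window target weights follow from the estimated response R. The mesh, source, and adjoint values below are made up; in MAVRIC/ADVANTG the adjoint comes from a deterministic transport solve, and FW-CADIS additionally weights the adjoint source by an inverse forward-flux estimate.

```python
import numpy as np

# Illustrative 1-D mesh with a hypothetical source and adjoint function.
cells = np.arange(10)
q = np.array([1.0, 0.8, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002])   # source
phi_adj = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1, 1.0, 3.0])

# Estimated response R = sum_i q_i * phi_adj_i (discrete analogue of the
# integral of the source times the adjoint flux).
R = np.sum(q * phi_adj)

# Consistent biased source: q_hat_i = q_i * phi_adj_i / R.
q_hat = q * phi_adj / R

# Weight-window target (center) weights: w_i = R / phi_adj_i, so that source
# particles are born at a weight that matches their window.
w_target = R / phi_adj

for i in cells:
    print(f"cell {i}: biased source {q_hat[i]:.3e}, target weight {w_target[i]:.3e}")

# FW-CADIS would instead weight the adjoint source by the inverse of a forward
# flux estimate before solving for phi_adj, aiming at uniform statistical
# uncertainty across all tally cells rather than a single detector response.
```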

  13. Conceptual design study of advanced acoustic composite nacelle. [for achieving reductions in community noise and operating expense

    NASA Technical Reports Server (NTRS)

    Goodall, R. G.; Painter, G. W.

    1975-01-01

    Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and operating expense through the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed-flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.

  14. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... reaching a decision on a variance application. Interested members of the public will be allowed a...

  15. Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction

    NASA Astrophysics Data System (ADS)

    Scarnati, Theresa; Gelb, Anne

    2018-04-01

    In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method for forming SAR images from multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as images from the MSTAR T-72 database.

  16. Aeronautical fuel conservation possibilities for advanced subsonic transports. [application of aeronautical technology for drag and weight reduction

    NASA Technical Reports Server (NTRS)

    Braslow, A. L.; Whitehead, A. H., Jr.

    1973-01-01

    The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R and T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.

  17. [Advances in microbial genome reduction and modification].

    PubMed

    Wang, Jianli; Wang, Xiaoyuan

    2013-08-01

    Microbial genome reduction and modification are important strategies for constructing cellular chassis used for synthetic biology. This article summarized the essential genes and the methods to identify them in microorganisms, compared various strategies for microbial genome reduction, and analyzed the characteristics of some microorganisms with the minimized genome. This review shows the important role of genome reduction in constructing cellular chassis.

  18. The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction

    NASA Astrophysics Data System (ADS)

    Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.

    2017-04-01

    Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on the aforementioned two parameters. This yielded mathematical solutions that can be used by FeCr producers to select reductants more optimally in future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors for predicting variance in pre-reduction and compressive strength, respectively. The role of H within this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since this would maximize the H content that would enhance pre-reduction.
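
    A minimal sketch of the kind of multiple linear regression used in the study, fitting pre-reduction degree against hydrogen and volatile content. The reductant data below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical reductant characterization data (not the study's measurements):
# columns = hydrogen content (pct), volatile content (pct).
X = np.array([
    [3.1, 8.0],
    [3.8, 12.5],
    [4.2, 20.0],
    [4.9, 28.0],
    [5.3, 33.5],
])
pre_reduction = np.array([38.0, 41.5, 45.0, 49.5, 52.0])  # pct, made up

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, pre_reduction, rcond=None)
intercept, b_hydrogen, b_volatiles = coef
print(f"pre-reduction ~ {intercept:.2f} + {b_hydrogen:.2f}*H + {b_volatiles:.2f}*volatiles")

# R^2 as a rough measure of explained variance.
pred = A @ coef
ss_res = np.sum((pre_reduction - pred) ** 2)
ss_tot = np.sum((pre_reduction - pre_reduction.mean()) ** 2)
print("R^2:", 1 - ss_res / ss_tot)
```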

  19. Speed Variance and Its Influence on Accidents.

    ERIC Educational Resources Information Center

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  20. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. For NASA's SLS ABEDRR procurement, Dynetics and AR formed a team to offer a series of full-scale risk mitigation hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the

  1. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and... variances from the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood...

  2. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and... variances from the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood...

  3. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    PubMed

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
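
    As a rough illustration (not the exact IVhet estimator, which is implemented in MetaXL), the sketch below pools hypothetical study effects with fixed-effect inverse-variance weights and then inflates the variance of the pooled estimate by a simple quasi-likelihood overdispersion factor based on Cochran's Q:

```python
import numpy as np

# Hypothetical per-study log odds ratios and their within-study variances.
y = np.array([0.10, -0.05, 0.30, 0.20, -0.15, 0.25])
v = np.array([0.04, 0.09, 0.02, 0.05, 0.12, 0.03])

# Fixed-effect inverse-variance weights and pooled estimate.
w = (1.0 / v) / np.sum(1.0 / v)
theta = np.sum(w * y)

# Naive fixed-effect variance of the pooled estimate.
var_fe = 1.0 / np.sum(1.0 / v)

# Simple quasi-likelihood overdispersion scaling based on Cochran's Q:
# inflate the variance when heterogeneity exceeds its expectation (k - 1).
k = len(y)
Q = np.sum((y - theta) ** 2 / v)
phi = max(1.0, Q / (k - 1))
var_scaled = var_fe * phi

print(f"pooled estimate: {theta:.3f}")
print(f"fixed-effect variance: {var_fe:.4f}, overdispersion-scaled: {var_scaled:.4f}")
```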

  4. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of a...

  5. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of a...

  6. Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilello, D.; Katz, J.; Esterly, S.

    2014-09-01

    Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.

  7. Advances in volcano monitoring and risk reduction in Latin America

    NASA Astrophysics Data System (ADS)

    McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Assistance Program, V. D.; Volcano Observatories, L. A.

    2014-12-01

    We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data

  8. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
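
    For context, a sketch of one commonly used design-based encounter-rate variance estimator of the kind examined in the paper, which treats the lines as a simple random sample; the line lengths and counts are made up, and the estimator variants compared by the authors differ in their weighting:

```python
import numpy as np

# Hypothetical line-transect data: per-line lengths l_i (km) and detections n_i.
l = np.array([4.0, 5.5, 3.0, 6.0, 4.5, 5.0])
n = np.array([7, 12, 3, 15, 8, 9])

k = len(l)
L = l.sum()
N = n.sum()
enc_rate = N / L

# One standard design-based form that treats lines as a simple random sample:
# var(n/L) ~ k / (L^2 (k-1)) * sum_i l_i^2 (n_i/l_i - n/L)^2
var_enc = k / (L**2 * (k - 1)) * np.sum(l**2 * (n / l - enc_rate) ** 2)

print(f"encounter rate: {enc_rate:.3f} per km")
print(f"estimated variance: {var_enc:.5f} (SE {np.sqrt(var_enc):.3f})")
```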

  9. Peptidic β-sheet binding with Congo Red allows both reduction of error variance and signal amplification for immunoassays.

    PubMed

    Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu

    2016-12-15

    Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance for monitoring the concentration of targets over time, and insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between peptidic β-sheet structure and Congo Red. The formation of peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, which is a crucial indicator for evaluating thrombus diseases in clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7 ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in clinic). Considering the wide range of ALP-based antibodies for immunoassays, such a novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Burden reduction of caregivers for users of care services provided by the public long-term care insurance system in Japan.

    PubMed

    Umegaki, Hiroyuki; Yanagawa, Madoka; Nonogaki, Zen; Nakashima, Hirotaka; Kuzuya, Masafumi; Endo, Hidetoshi

    2014-01-01

    We surveyed the care burden of family caregivers, their satisfaction with the services, and whether their care burden was reduced by the introduction of the LTCI care services. We randomly enrolled 3000 of 43,250 residents of Nagoya City aged 65 and over who had been certified as requiring long-term care and who used at least one type of service provided by the public LTCI; 1835 (61.2%) subjects returned the survey. A total of 1015 subjects for whom complete sets of data were available were employed for statistical analysis. Analysis of variance for the continuous variables and χ² analysis for the categorical variables were performed. Multiple logistic analysis was performed with the factors with p values < 0.2 in the χ² analysis of burden reduction. A total of 68.8% of the caregivers indicated that the care burden was reduced by the introduction of the LTCI care services, and 86.8% of the caregivers were satisfied with the LTCI care services. A lower age of caregivers, a more advanced need classification level, and more satisfaction with the services were independently associated with a reduction of the care burden. In Japanese LTCI, the overall satisfaction of the caregivers appears to be relatively high and is associated with the reduction of the care burden. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Regulatory Risk Reduction for Advanced Reactor Technologies – FY2016 Status and Work Plan Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moe, Wayne Leland

    2016-08-01

    Millions of public and private sector dollars have been invested over recent decades to realize greater efficiency, reliability, and the inherent and passive safety offered by advanced nuclear reactor technologies. However, a major challenge in experiencing those benefits resides in the existing U.S. regulatory framework. This framework governs all commercial nuclear plant construction, operations, and safety issues and is heavily centered on large light water reactor (LWR) technology. The framework must be modernized to effectively deal with non-LWR advanced designs if those designs are to become part of the U.S. energy supply. The U.S. Department of Energy's (DOE) Advanced Reactor Technologies (ART) Regulatory Risk Reduction (RRR) initiative, managed by the Regulatory Affairs Department at the Idaho National Laboratory (INL), is establishing a capability that can systematically retire extraneous licensing risks associated with regulatory framework incompatibilities. This capability proposes to rely heavily on the perspectives of the affected regulated community (i.e., commercial advanced reactor designers/vendors and prospective owner/operators) yet remain tuned to assuring public safety and acceptability by regulators responsible for license issuance. The extent to which broad industry perspectives are being incorporated into the proposed framework makes this initiative unique and of potential benefit to all future domestic non-LWR applicants.

  12. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  13. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad
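
    A minimal sketch of a variance-based (Sobol-style) first-order sensitivity calculation using a pick-freeze Monte Carlo estimator. The toy linear-quadratic response and the uniform input ranges are illustrative only, not the abstract's carbon-ion treatment-plan model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a dose/RBE model: the output depends on three uncertain
# biological inputs (e.g., alpha, beta, dose per fraction). The function and
# the uniform input ranges are illustrative only.
def model(x):
    alpha, beta, d = x[:, 0], x[:, 1], x[:, 2]
    return alpha * d + beta * d**2          # linear-quadratic-style effect

n = 100_000
low = np.array([0.1, 0.01, 1.0])
high = np.array([0.5, 0.05, 3.0])

# Pick-freeze (Saltelli-style) estimation of first-order sensitivity indices.
A = rng.uniform(low, high, size=(n, 3))
B = rng.uniform(low, high, size=(n, 3))
fA, fB = model(A), model(B)
var_total = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["alpha", "beta", "dose"]):
    AB = A.copy()
    AB[:, i] = B[:, i]                      # replace only input i
    fAB = model(AB)
    S_i = np.mean(fB * (fAB - fA)) / var_total
    print(f"first-order sensitivity of {name}: {S_i:.2f}")
```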

  14. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    USDA-ARS?s Scientific Manuscript database

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  15. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with
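
    A minimal sketch of the core flowVS idea, assuming simulated lognormal populations: transform a channel with asinh(x/c) for several candidate cofactors and keep the cofactor whose Bartlett test indicates the most homogeneous within-population variances. The candidate grid and data are illustrative, not the package's actual optimization:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated fluorescence values for two cell populations in one channel,
# with variance growing with the mean (a common feature of FC data).
pop1 = rng.lognormal(mean=4.0, sigma=0.4, size=5000)
pop2 = rng.lognormal(mean=6.0, sigma=0.4, size=5000)

# Candidate cofactors for the inverse hyperbolic sine transform asinh(x / c).
cofactors = [1, 5, 10, 50, 100, 500, 1000]

best = None
for c in cofactors:
    t1, t2 = np.arcsinh(pop1 / c), np.arcsinh(pop2 / c)
    # Bartlett's test: a large p-value means the population variances look
    # homogeneous after the transform (the flowVS selection idea).
    stat, p = stats.bartlett(t1, t2)
    if best is None or p > best[1]:
        best = (c, p)

print(f"selected cofactor: {best[0]} (Bartlett p = {best[1]:.3g})")
```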

  16. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-07

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
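
    A minimal sketch of weighted regression under a power model of variance, var(y) = a*signal^b, contrasted with an unweighted fit; the calibration data, the noise parameters, and the use of the known variance function for the weights are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated calibration data with heteroskedastic noise: the variance follows
# a power model, var(y) = a * signal^b (a and b values are illustrative).
conc = np.linspace(1, 100, 20)
true_signal = 5.0 + 2.0 * conc
a_var, b_var = 0.05, 1.5
y = true_signal + rng.normal(0, np.sqrt(a_var * true_signal**b_var))

# Weighted least squares with weights 1/variance from the power model.
# (In practice the variance function would be estimated from replicates.)
weights = 1.0 / (a_var * true_signal**b_var)
W = np.diag(weights)
X = np.column_stack([np.ones_like(conc), conc])
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Unweighted fit for comparison (the homoskedastic assumption).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print("weighted fit   (intercept, slope):", beta_wls)
print("unweighted fit (intercept, slope):", beta_ols)
```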

  17. Hidden Item Variance in Multiple Mini-Interview Scores

    ERIC Educational Resources Information Center

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  18. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  19. Why risk is not variance: an expository note.

    PubMed

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
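
    A small numeric illustration of the dominance violation described above, using a quadratic mean-variance score mu - lambda*variance with a hypothetical variance-aversion weight:

```python
# Two prospects, each paying a fixed gain of 100 with probability p, else 0.
# The p = 0.5 prospect is better in every respect than the p = 0.1 prospect,
# yet a mean-variance score with a large enough variance penalty prefers the
# dominated one.
gain = 100.0
lam = 0.05                                   # hypothetical variance-aversion weight
for p in (0.1, 0.5):
    mean = p * gain
    var = p * (1 - p) * gain**2              # variance of a Bernoulli gain
    score = mean - lam * var
    print(f"p = {p}: mean = {mean:5.1f}, variance = {var:6.1f}, score = {score:6.1f}")
# p = 0.1 scores -35.0 while p = 0.5 scores -75.0, so this mean-variance rule
# picks the worse prospect, which is the dominance violation described above.
```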

  20. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.

  1. Advanced Risk Reduction Tool (ARRT) Special Case Study Report: Science and Engineering Technical Assessments (SETA) Program

    NASA Technical Reports Server (NTRS)

    Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian

    2000-01-01

    This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings for exploring the correlation between the underlying models of Advanced Risk Reduction Tool (ARRT) relative to how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.

  2. Variance adaptation in navigational decision making

    NASA Astrophysics Data System (ADS)

    Gershow, Marc; Gepner, Ruben; Wolk, Jason; Wadekar, Digvijay

    Drosophila larvae navigate their environments using a biased random walk strategy. A key component of this strategy is the decision to initiate a turn (change direction) in response to declining conditions. We modeled this decision as the output of a Linear-Nonlinear-Poisson cascade and used reverse correlation with visual and fictive olfactory stimuli to find the parameters of this model. Because the larva responds to changes in stimulus intensity, we used stimuli with uncorrelated normally distributed intensity derivatives, i.e. Brownian processes, and took the stimulus derivative as the input to our LNP cascade. In this way, we were able to present stimuli with 0 mean and controlled variance. We found that the nonlinear rate function depended on the variance in the stimulus input, allowing larvae to respond more strongly to small changes in low-noise compared to high-noise environments. We measured the rate at which the larva adapted its behavior following changes in stimulus variance, and found that larvae adapted more quickly to increases in variance than to decreases, consistent with the behavior of an optimal Bayes estimator. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  3. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  4. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.

  5. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes, specifically the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
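
    A minimal sketch of the antithetic-pair idea for tau-leaping, assuming a toy production-degradation network: paired trajectories draw their Poisson increments by inverse transform from antithetic uniforms, so the pair-averaged endpoint has lower variance than an average of independent runs. Rates, step sizes, and the pairing scheme are illustrative, not the paper's exact construction:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(11)

# Toy production-degradation network: X -> X+1 at rate k1, X -> X-1 at rate k2*X.
k1, k2, tau, steps, x0 = 10.0, 0.1, 0.1, 100, 100

def leap(x, u1, u2):
    # Poisson increments drawn by inverse transform so that a uniform and its
    # antithetic partner (1 - u) give negatively correlated jump counts.
    births = poisson.ppf(u1, k1 * tau)
    deaths = poisson.ppf(u2, k2 * max(x, 0) * tau)
    return max(x + births - deaths, 0)

def paired_endpoint():
    xa = xb = x0
    for _ in range(steps):
        u1, u2 = rng.random(), rng.random()
        xa = leap(xa, u1, u2)
        xb = leap(xb, 1.0 - u1, 1.0 - u2)    # antithetic partner path
    return 0.5 * (xa + xb)

def independent_endpoint():
    x = x0
    for _ in range(steps):
        x = leap(x, rng.random(), rng.random())
    return x

n_pairs = 200
antithetic = np.array([paired_endpoint() for _ in range(n_pairs)])
indep = np.array([0.5 * (independent_endpoint() + independent_endpoint())
                  for _ in range(n_pairs)])

print("variance of paired-average estimator, antithetic:", antithetic.var(ddof=1))
print("variance of paired-average estimator, independent:", indep.var(ddof=1))
```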

  6. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR... risk and will not be modified by the granting of a variance. The community, after examining the... review a community's findings justifying the granting of variances, and if that review indicates a...

  7. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR... risk and will not be modified by the granting of a variance. The community, after examining the... review a community's findings justifying the granting of variances, and if that review indicates a...

  8. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA

  9. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  10. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    Summary: We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  11. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... is unable to meet the time requirements for which the variance is requested; and (2) A revised UR...

  12. One-shot estimate of MRMC variance: AUC.

    PubMed

    Gallas, Brandon D

    2006-03-01

    One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.

  13. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  14. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  15. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  16. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  17. Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata

    PubMed Central

    Sztepanacz, Jacqueline L.; Blows, Mark W.

    2015-01-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700

  18. Flagged uniform particle splitting for variance reduction in proton and carbon ion track-structure simulations

    NASA Astrophysics Data System (ADS)

    Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce

    2017-08-01

    deviations) for endpoints (1) and (2), within 2% (1 standard deviation) for endpoint (3). In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.

  19. CMB-S4 and the hemispherical variance anomaly

    NASA Astrophysics Data System (ADS)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  20. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  1. Removal of PCBs in contaminated soils by means of chemical reduction and advanced oxidation processes.

    PubMed

    Rybnikova, V; Usman, M; Hanna, K

    2016-09-01

    Although chemical reduction and advanced oxidation processes have been widely used individually, very few studies have assessed the combined reduction/oxidation approach for soil remediation. In the present study, experiments were performed in spiked sand and historically contaminated soil by using four synthetic nanoparticles (Fe(0), Fe/Ni, Fe3O4, Fe3 - x Ni x O4). These nanoparticles were tested first for reductive transformation of polychlorinated biphenyls (PCBs) and then employed as catalysts to promote chemical oxidation reactions (H2O2 or persulfate). The results indicated that bimetallic Fe/Ni nanoparticles showed the highest efficiency in reduction of PCB28 and PCB118 in spiked sand (97 and 79 %, respectively), whereas magnetite (Fe3O4) exhibited high catalytic stability during the combined reduction/oxidation approach. In chemical oxidation, persulfate showed a higher PCB degradation extent than hydrogen peroxide. As expected, the degradation efficiency was found to be limited in historically contaminated soil, where only Fe(0) and Fe/Ni particles exhibited reductive capability towards PCBs (13 and 18 %). In the oxidation step, the highest degradation extents were obtained in the presence of Fe(0) and Fe/Ni (18-19 %). The increase in particle and oxidant doses improved the efficiency of treatment, but overall degradation extents did not exceed 30 %, suggesting that only a small part of the PCBs in soil was available for reaction with catalyst and/or oxidant. The use of organic solvent or cyclodextrin to improve PCB availability in soil did not enhance degradation efficiency, underscoring the strong impact of the soil matrix. Moreover, better PCB degradation was observed in sand spiked with extractable organic matter separated from contaminated soil. In contrast to fractions with higher particle size (250-500 and <500 μm), no PCB degradation was observed in the finest fraction (≤250 μm) having higher organic matter content. These findings

  2. Bootstrap Estimation and Testing for Variance Equality.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Algina, James

    The purpose of this study was to develop a single procedure for comparing population variances that could be used across distribution forms. Bootstrap methodology was used to estimate the variability of the sample variance statistic when the population distribution was normal, platykurtic, and leptokurtic. The data for the study were generated and…

  3. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and exceptions. (a... the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood plain...

  4. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste...

  5. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste...

  6. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  7. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
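
    The sketch below illustrates only the simple mean-imputation baseline discussed above (not the proposed gamma meta-regression multiple-imputation method): missing study variances are replaced by the sample-size-weighted mean of the observed variances before forming inverse-variance weights. Values are made up.

        # Mean-imputation baseline for missing study-level variances.
        import numpy as np

        def impute_missing_variances(variances, sample_sizes):
            v = np.asarray(variances, float)   # np.nan where a study did not report a variance
            n = np.asarray(sample_sizes, float)
            observed = ~np.isnan(v)
            v_bar = np.average(v[observed], weights=n[observed])
            return np.where(observed, v, v_bar)

        v = impute_missing_variances([4.0, np.nan, 2.5, np.nan], [50, 80, 40, 30])
        print(v, 1.0 / v)                      # imputed variances and inverse-variance weights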

  8. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.

  9. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…

  10. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a newer algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, the so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
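
    As a hedged sketch of the delay-multiply-and-sum (DMAS) idea referenced above (not the authors' MVB-DMAS beamformer), the code below combines already-delayed channel samples for one image point by summing the signed square roots of all pairwise products instead of the raw samples; the sample values are invented.

        # DMAS-style combination of delayed channel samples for a single pixel.
        import numpy as np

        def dmas_sample(delayed_samples):
            s = np.asarray(delayed_samples, float)
            out = 0.0
            for i in range(len(s) - 1):
                for j in range(i + 1, len(s)):
                    prod = s[i] * s[j]
                    out += np.sign(prod) * np.sqrt(abs(prod))
            return out

        print(dmas_sample([0.8, 0.7, 0.9, 0.75]))   # coherent channels reinforce
        print(dmas_sample([0.8, -0.6, 0.1, -0.3]))  # incoherent channels largely cancel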

  11. Noise Reduction Potential of Large, Over-the-Wing Mounted, Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Indeed, the noise goal for NASA's Aeronautics Enterprise calls for technologies that will help to provide a 20 EPNdB reduction relative to today's levels by the year 2022. Further, the large fan diameters of modern, increasingly higher bypass ratio engines pose a significant packaging and aircraft installation challenge. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large, ultra high bypass ratio cycles to continue, this over-the-wing design is believed to offer noise shielding benefits to observers on the ground. This paper describes the analytical certification noise predictions of a notional, long haul, commercial quadjet transport with advanced, high bypass engines mounted above the wing.

  12. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  13. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.

  14. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    PubMed

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existing mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells yielded solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
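
    The sketch below is only an illustration of the general idea of combining null-independent mean and variance heterogeneity tests; it pairs a Welch t test with Levene's test via Fisher's combination and is not the authors' exact IMVT statistic. Data are simulated.

        # Combining a mean-heterogeneity test and a variance-heterogeneity test
        # whose p-values are treated as independent under the null.
        import numpy as np
        from scipy import stats

        def combined_mean_variance_test(x, y):
            _, p_mean = stats.ttest_ind(x, y, equal_var=False)   # Welch t test on means
            _, p_var = stats.levene(x, y)                        # variance heterogeneity
            chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))       # Fisher combination
            return p_mean, p_var, stats.chi2.sf(chi2, df=4)

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 1.0, 40)        # condition A
        y = rng.normal(0.3, 2.0, 40)        # condition B: shifted mean, inflated variance
        print(combined_mean_variance_test(x, y))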

  15. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  16. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
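
    A minimal sketch of the rescaling described above, with a made-up relationship matrix: Dk is the mean self-relationship minus the mean of all (self- and across-) relationships, and the genetic variance referred to the chosen reference population is the estimated variance component multiplied by Dk.

        # Dk statistic from a relationship matrix K and the rescaled genetic variance.
        import numpy as np

        def dk_statistic(K):
            K = np.asarray(K, float)
            return np.mean(np.diag(K)) - np.mean(K)

        K = np.array([[1.00, 0.25, 0.10],
                      [0.25, 1.05, 0.30],
                      [0.10, 0.30, 0.98]])
        sigma2_hat = 2.4                        # variance component from the mixed model (made up)
        print(dk_statistic(K), sigma2_hat * dk_statistic(K))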

  17. Variance computations for functional of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  18. Variance computations for functional of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  19. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
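
    As a hedged illustration of a robust ("sandwich") variance for weighted meta-regression coefficients, the sketch below treats the inverse-variance weights as a working covariance and uses squared residuals in the middle of the sandwich. It is not the Knapp and Hartung adjustment mentioned above, and the data are invented.

        # Robust (sandwich) covariance for weighted least-squares meta-regression.
        import numpy as np

        def robust_meta_regression(X, y, w):
            X = np.asarray(X, float); y = np.asarray(y, float); w = np.asarray(w, float)
            W = np.diag(w)
            bread = np.linalg.inv(X.T @ W @ X)
            beta = bread @ X.T @ W @ y
            resid = y - X @ beta
            meat = X.T @ W @ np.diag(resid ** 2) @ W @ X
            return beta, bread @ meat @ bread          # estimates and robust covariance

        X = np.column_stack([np.ones(5), [0.1, 0.4, 0.2, 0.8, 0.5]])   # intercept + covariate
        y = np.array([0.30, 0.45, 0.35, 0.60, 0.50])                   # study effect sizes
        w = np.array([10.0, 8.0, 12.0, 5.0, 9.0])                      # inverse-variance weights
        print(robust_meta_regression(X, y, w))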

  20. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    NASA Astrophysics Data System (ADS)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variances of electrocardiographic RR intervals are higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
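
    A minimal sketch of the idea above, with invented window length and threshold: slide a window over the RR-interval series, compute the variance within each window, and flag windows whose variance exceeds a threshold as possible atrial fibrillation.

        # Window-variance flagging of RR-interval series.
        import numpy as np

        def flag_af_windows(rr_intervals, window=10, threshold=0.01):
            rr = np.asarray(rr_intervals, float)       # RR intervals in seconds
            return np.array([rr[i:i + window].var() > threshold
                             for i in range(len(rr) - window + 1)])

        rng = np.random.default_rng(2)
        normal = rng.normal(0.80, 0.02, 60)            # regular rhythm
        af = rng.normal(0.70, 0.15, 60)                # irregular rhythm
        print(flag_af_windows(normal).mean(), flag_af_windows(af).mean())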

  1. Means and Variances without Calculus

    ERIC Educational Resources Information Center

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
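
    A small sketch of the approach described above, assuming a standard normal density as the example: approximate the continuous density on a fine grid and compute the mean and variance as weighted sums, with no integration.

        # Discrete-grid approximation to the mean and variance of a continuous density.
        import numpy as np

        def discrete_mean_variance(pdf, lo, hi, n=10001):
            x = np.linspace(lo, hi, n)
            w = pdf(x)
            w = w / w.sum()                            # normalized grid weights
            mean = np.sum(w * x)
            return mean, np.sum(w * (x - mean) ** 2)

        pdf = lambda x: np.exp(-0.5 * x ** 2)          # unnormalized standard normal
        print(discrete_mean_variance(pdf, -8.0, 8.0))  # approximately (0.0, 1.0)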

  2. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  3. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  4. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  5. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance has proven excellent as a handy, understandable tool, both for time-domain analysis of GPS cesium frequency standards and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging-noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
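
    As a hedged sketch (non-overlapping estimator, simulated data), the code below computes the Hadamard variance of fractional-frequency samples as one sixth of the mean squared second difference, which cancels a linear frequency drift; operational clock analysis typically uses overlapping estimators instead.

        # Non-overlapping Hadamard variance of fractional frequency data.
        import numpy as np

        def hadamard_variance(y):
            y = np.asarray(y, float)                   # fractional frequency samples
            d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]        # second differences
            return np.mean(d2 ** 2) / 6.0

        rng = np.random.default_rng(3)
        noise = rng.normal(0, 1e-12, 1000)
        drift = 1e-13 * np.arange(1000)                # linear frequency drift
        print(hadamard_variance(noise))                # essentially unchanged by the drift:
        print(hadamard_variance(noise + drift))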

  6. Nonsymbolic number and cumulative area representations contribute shared and unique variance to symbolic math competence

    PubMed Central

    Lourenco, Stella F.; Bonny, Justin W.; Fernandez, Edmund P.; Rao, Sonia

    2012-01-01

    Humans and nonhuman animals share the capacity to estimate, without counting, the number of objects in a set by relying on an approximate number system (ANS). Only humans, however, learn the concepts and operations of symbolic mathematics. Despite vast differences between these two systems of quantification, neural and behavioral findings suggest functional connections. Another line of research suggests that the ANS is part of a larger, more general system of magnitude representation. Reports of cognitive interactions and common neural coding for number and other magnitudes such as spatial extent led us to ask whether, and how, nonnumerical magnitude interfaces with mathematical competence. On two magnitude comparison tasks, college students estimated (without counting or explicit calculation) which of two arrays was greater in number or cumulative area. They also completed a battery of standardized math tests. Individual differences in both number and cumulative area precision (measured by accuracy on the magnitude comparison tasks) correlated with interindividual variability in math competence, particularly advanced arithmetic and geometry, even after accounting for general aspects of intelligence. Moreover, analyses revealed that whereas number precision contributed unique variance to advanced arithmetic, cumulative area precision contributed unique variance to geometry. Taken together, these results provide evidence for shared and unique contributions of nonsymbolic number and cumulative area representations to formally taught mathematics. More broadly, they suggest that uniquely human branches of mathematics interface with an evolutionarily primitive general magnitude system, which includes partially overlapping representations of numerical and nonnumerical magnitude. PMID:23091023

  7. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  8. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…

  9. Individual differences in personality traits reflect structural variance in specific brain regions.

    PubMed

    Gardini, Simona; Cloninger, C Robert; Venneri, Annalena

    2009-06-30

    Personality dimensions such as novelty seeking (NS), harm avoidance (HA), reward dependence (RD) and persistence (PER) are said to be heritable, stable across time and dependent on genetic and neurobiological factors. Recently a better understanding of the relationship between personality traits and brain structures/systems has become possible due to advances in neuroimaging techniques. This Magnetic Resonance Imaging (MRI) study investigated if individual differences in these personality traits reflected structural variance in specific brain regions. A large sample of eighty five young adult participants completed the Three-dimensional Personality Questionnaire (TPQ) and had their brain imaged with MRI. A voxel-based correlation analysis was carried out between individuals' personality trait scores and grey matter volume values extracted from 3D brain scans. NS correlated positively with grey matter volume in frontal and posterior cingulate regions. HA showed a negative correlation with grey matter volume in orbito-frontal, occipital and parietal structures. RD was negatively correlated with grey matter volume in the caudate nucleus and in the rectal frontal gyrus. PER showed a positive correlation with grey matter volume in the precuneus, paracentral lobule and parahippocampal gyrus. These results indicate that individual differences in the main personality dimensions of NS, HA, RD and PER, may reflect structural variance in specific brain areas.

  10. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  11. Tumor Volume Reduction Rate After Preoperative Chemoradiotherapy as a Prognostic Factor in Locally Advanced Rectal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeo, Seung-Gu; Department of Radiation Oncology, Soonchunhyang University College of Medicine, Cheonan; Kim, Dae Yong, E-mail: radiopiakim@hanmail.net

    2012-02-01

    Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3-4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume - post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27-99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0-100%). Downstaging (ypT0-2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
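
    A direct transcription of the TVRR equation quoted above, with made-up volumes for illustration.

        # TVRR (%) = (pre-CRT volume - post-CRT volume) * 100 / pre-CRT volume
        def tumor_volume_reduction_rate(pre_volume, post_volume):
            return (pre_volume - post_volume) * 100.0 / pre_volume

        print(tumor_volume_reduction_rate(48.0, 14.0))   # about 70.8 %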

  12. 10 CFR 851.32 - Action on variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... cited by the Chief Health, Safety and Security Officer; or (ii) Forward to the Under Secretary the... DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a) Procedures for an approval recommendation. (1) If the Chief Health, Safety and Security Officer recommends...

  13. A Mean variance analysis of arbitrage portfolios

    NASA Astrophysics Data System (ADS)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  14. Jackknife for Variance Analysis of Multifactor Experiments.

    DTIC Science & Technology

    1982-05-01

    The variance-covariance matrix is generated by a subroutine named CORAN (UNIVAC, 1969). The jackknife variances are then punched on computer cards in the same format…
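
    Since the original report's routine is not reproduced here, the sketch below shows a generic delete-one jackknife variance estimate for an arbitrary statistic, in the spirit of the entry above; the data are invented.

        # Delete-one jackknife variance of a statistic.
        import numpy as np

        def jackknife_variance(data, statistic=np.mean):
            data = np.asarray(data, float)
            n = len(data)
            loo = np.array([statistic(np.delete(data, i)) for i in range(n)])
            return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

        x = np.array([2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3])
        print(jackknife_variance(x), x.var(ddof=1) / len(x))   # identical for the sample mean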

  15. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  16. Gender Variance and Educational Psychology: Implications for Practice

    ERIC Educational Resources Information Center

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  17. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  18. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  19. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.

  20. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  1. Model-based variance-stabilizing transformation for Illumina microarray data.

    PubMed

    Lin, Simon M; Du, Pan; Huber, Wolfgang; Kibbe, Warren A

    2008-02-01

    Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and Variance-stabilizing normalization (VSN) by using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.
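
    As a hedged illustration of the kind of transform discussed above, the sketch below applies a generalized-log / arcsinh mapping, a common choice when the standard deviation of an intensity grows roughly linearly with its mean. The parameters are invented; this is not the fitted VST from the lumi package.

        # Simple arcsinh-style variance-stabilizing transform of intensities.
        import numpy as np

        def arcsinh_vst(intensity, a=1.0, b=0.05):
            return np.arcsinh(a + b * np.asarray(intensity, float))

        raw = np.array([50.0, 200.0, 1500.0, 12000.0])
        print(arcsinh_vst(raw))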

  2. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. ?? 2004 Published by Elsevier Ltd.
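
    A hedged sketch of a local variance style anomaly test like the one described above: each pixel's integrated NDVI is compared with the mean and standard deviation of its surrounding window, and strong positive or negative departures are flagged. Window size, threshold, and data are invented.

        # Flag pixels that are locally anomalous relative to their neighbourhood.
        import numpy as np

        def local_anomalies(img, window=5, z_thresh=2.0):
            img = np.asarray(img, float)
            half = window // 2
            padded = np.pad(img, half, mode="reflect")
            labels = np.zeros(img.shape, dtype=int)       # -1 low, 0 normal, +1 high
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    block = padded[r:r + window, c:c + window]
                    z = (img[r, c] - block.mean()) / (block.std() + 1e-12)
                    labels[r, c] = 1 if z > z_thresh else (-1 if z < -z_thresh else 0)
            return labels

        rng = np.random.default_rng(4)
        ndvi = rng.normal(0.5, 0.02, (20, 20))
        ndvi[10, 10] = 0.9                                # simulated anomalous pixel
        print(local_anomalies(ndvi)[10, 10])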

  3. Energy Saving Melting and Revert Reduction Technology (Energy SMARRT): Manufacturing Advanced Engineered Components Using Lost Foam Casting Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Littleton, Harry; Griffin, John

    2011-07-31

    This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control and marketing tools were developed to advance the Lost Foam Casting process application and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings. All three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy savings estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion BTU/year, rising to 6.46 trillion BTU/year with 100% market penetration by 2023. Along with these energy savings, reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with the melting and pouring of the metal. The estimated average annual CO2 reduction through 2020 is 0.03 Million Metric Tons of Carbon Equivalent (MM TCE).

  4. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  5. Effect of Two Advanced Noise Reduction Technologies on the Aerodynamic Performance of an Ultra High Bypass Ratio Fan

    NASA Technical Reports Server (NTRS)

    Hughes, Christopher E.; Gazzaniga, John A.

    2013-01-01

    A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 to 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.

  6. Small Drinking Water System Variances

    EPA Pesticide Factsheets

    Small system variances allow a small system to install and maintain technology that can remove a contaminant to the maximum extent that is affordable and protective of public health in lieu of technology that can achieve compliance with the regulation.

  7. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  8. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.

  9. The Evolution and Consequences of Sex-Specific Reproductive Variance

    PubMed Central

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130

  10. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
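
    The "well-known systematic error of the spectrum of the sample covariance matrix" mentioned above is easy to reproduce. The short Monte Carlo sketch below (not the paper's factor-analysis setting or the DVA algorithm itself, and with arbitrary dimensions) shows the characteristic overestimation of the largest and underestimation of the smallest eigenvalues when the number of observations is not much larger than the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# True covariance is the identity, so every population eigenvalue equals 1.
n_obs, dim, n_rep = 120, 60, 200
extremes = np.empty((n_rep, 2))
for i in range(n_rep):
    X = rng.standard_normal((n_obs, dim))
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    extremes[i] = eigvals[0], eigvals[-1]

print(f"mean smallest sample eigenvalue: {extremes[:, 0].mean():.2f}  (true value 1)")
print(f"mean largest  sample eigenvalue: {extremes[:, 1].mean():.2f}  (true value 1)")
```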

  11. The Genealogical Consequences of Fecundity Variance Polymorphism

    PubMed Central

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  12. Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction

    NASA Astrophysics Data System (ADS)

    Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng

    Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely reduction temperature, reduction time, and proportion of additive, on the recovery of Fe have been investigated. Experiments were carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at the 92.5% confidence level.

  13. Aircraft Noise Reduction Subproject Overview

    NASA Technical Reports Server (NTRS)

    Fernandez, Hamilton; Nark, Douglas M.; Van Zante, Dale E.

    2016-01-01

    The material presents highlights of propulsion and airframe noise research being completed for the Advanced Air Transport Technology Project. The basis of noise reduction plans, along with representative work for the airframe, propulsion, and propulsion-airframe integration, is discussed for the Aircraft Noise Reduction Subproject.

  14. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f...

  15. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    PubMed

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  16. Recent advances in reduction methods for nonlinear problems. [in structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1981-01-01

    Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.

  17. Reduction in advanced breast cancer after introduction of a mammography screening program in Tyrol/Austria.

    PubMed

    Oberaigner, W; Geiger-Gritsch, Sabine; Edlinger, M; Daniaux, M; Knapp, R; Hubalek, M; Siebert, U; Marth, C; Buchberger, W

    2017-06-01

    We analysed all female breast cancer (BC) cases in Tyrol/Austria regarding the shift in cancer characteristics, especially the shift in advanced BC, for the group exposed to screening as compared to the group unexposed to screening. The analysis was based on all BC cases diagnosed in women aged 40-69 years, resident in Tyrol, and diagnosed between 2009 and 2013. The data were linked to the Tyrolean mammography screening programme database to classify BC cases as "exposed to screening" or "unexposed to screening". Age-adjusted relative risks (RR) were estimated by relating the exposed to the unexposed group. In a total of about 145,000 women aged 40-69 years living in Tyrol during the study period, 1475 invasive BC cases were registered. We estimated an age-adjusted relative risk (RR) for tumour size ≥ 21 mm of 0.72 (95% confidence interval (CI) 0.60 to 0.86), for metastatic BC of 0.27 (95% CI 0.17 to 0.46) and for advanced BC of 0.83 (95% CI 0.71 to 0.96), each comparing those exposed to those unexposed to screening, respectively. In our population-based registry analysis we observed that participation in the mammography screening programme in Tyrol is associated with a 28% decrease in risk for BC cases with tumour size ≥ 21 mm and a 17% decrease in risk for advanced BC. We therefore expect the Tyrolean mammography programme to show a reduction in BC mortality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and average...

  19. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (formerly the Radiation Control for Health and Safety Act of 1968), and: (i) The scope of the requested... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or more...

  20. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
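
    A sketch of the idea for the simplest of the examples mentioned, a birth-death process. The dynamics are driven by two independent unit-rate Poisson paths (one per reaction channel, via a next-reaction-style time change), so a first-order, variance-based sensitivity for a channel can be estimated by freezing that channel's path and averaging over the other. Rates, horizons, and sample sizes below are arbitrary, and the naive double-loop Sobol estimator is used purely for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

b, d, x0, t_end = 10.0, 0.5, 0, 5.0        # birth rate, per-capita death rate

def poisson_path(n=4000):
    """Arrival times of a unit-rate Poisson process (one independent path per channel)."""
    return np.cumsum(rng.exponential(1.0, size=n))

def simulate(path_birth, path_death):
    """Next-reaction simulation of the birth-death process driven by two frozen paths."""
    paths = (path_birth, path_death)
    x, t = x0, 0.0
    T = np.zeros(2)                         # internal (rescaled) time of each channel
    nxt = [0, 0]                            # index of the next unused arrival in each path
    P = np.array([path_birth[0], path_death[0]])
    while True:
        a = np.array([b, d * x])            # propensities
        dt = np.full(2, np.inf)
        for k in range(2):
            if a[k] > 0.0:
                dt[k] = (P[k] - T[k]) / a[k]
        k = int(np.argmin(dt))
        if t + dt[k] > t_end:
            return x
        t += dt[k]
        T += a * dt[k]
        x += 1 if k == 0 else -1
        nxt[k] += 1
        P[k] = paths[k][nxt[k]]

# First-order sensitivity of X(t_end) to the birth channel: variance (over birth paths)
# of the conditional mean (over death paths), divided by the total variance.
n_outer, n_inner = 40, 40
samples = np.empty((n_outer, n_inner))
for i in range(n_outer):
    frozen_birth = poisson_path()
    for j in range(n_inner):
        samples[i, j] = simulate(frozen_birth, poisson_path())

S_birth = samples.mean(axis=1).var() / samples.var()
print(f"estimated first-order index of the birth channel: {S_birth:.2f}")
```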

  1. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  3. Host nutrition alters the variance in parasite transmission potential

    PubMed Central

    Vale, Pedro F.; Choisy, Marc; Little, Tom J.

    2013-01-01

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts. PMID:23407498

  4. Host nutrition alters the variance in parasite transmission potential.

    PubMed

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
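
    The mean-variance contrast described above (Poisson-like parasite loads under low food, overdispersed negative binomial loads under high food) can be screened for directly from count data. The sketch below uses simulated counts with invented parameters, not the Daphnia data; the moment estimate of the negative-binomial dispersion follows Var = mu + mu^2/k.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spore counts per host under the two feeding regimes.
low_food = rng.poisson(lam=8.0, size=60)                                  # mean ~ variance
high_food = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + 30.0), size=60)   # overdispersed

def summarize(counts, label):
    mu, var = counts.mean(), counts.var(ddof=1)
    print(f"{label}: mean={mu:5.1f}  variance={var:6.1f}  variance/mean={var / mu:4.1f}")
    if var > mu:
        # Moment estimate of the negative-binomial dispersion k from Var = mu + mu^2 / k.
        print(f"{'':>10}moment estimate of dispersion k = {mu**2 / (var - mu):.2f}")

summarize(low_food, "low food  ")
summarize(high_food, "high food ")
```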

  5. Robust versus consistent variance estimators in marginal structural Cox models.

    PubMed

    Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris

    2018-06-11

    In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
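
    For orientation, the snippet below sketches the weighted fit that the robust ("sandwich") variance is usually attached to, using lifelines with a single-time-point treatment and one baseline confounder (a deliberately simplified stand-in for the time-dependent setting of the paper; all data and column names are invented). The `robust=True` option requests the robust variance discussed above, which treats the weights as known; the consistent estimator that accounts for weight estimation is not part of standard packages and would have to be coded from the paper's formulas.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000

# Simulated data: confounder x affects both treatment assignment and the hazard.
x = rng.normal(size=n)
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))
rate = 0.1 * np.exp(0.5 * x - 0.3 * treated)
time = rng.exponential(1.0 / rate)
event = (time < 5.0).astype(int)
df = pd.DataFrame({"time": np.minimum(time, 5.0), "event": event,
                   "treated": treated, "x": x})

# Stabilized inverse-probability-of-treatment weights from a logistic propensity model.
ps = LogisticRegression().fit(df[["x"]], df["treated"]).predict_proba(df[["x"]])[:, 1]
p_marginal = df["treated"].mean()
df["iptw"] = np.where(df["treated"] == 1, p_marginal / ps, (1.0 - p_marginal) / (1.0 - ps))

# Weighted Cox fit; robust=True gives the sandwich variance that ignores weight estimation.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "treated", "iptw"]], duration_col="time", event_col="event",
        weights_col="iptw", robust=True)
cph.print_summary()
```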

  6. Advanced Subsonic Airplane Design and Economic Studies

    NASA Technical Reports Server (NTRS)

    Liebeck, Robert H.; Andrastek, Donald A.; Chau, Johnny; Girvin, Raquel; Lyon, Roger; Rawdon, Blaine K.; Scott, Paul W.; Wright, Robert A.

    1995-01-01

    A study was made to examine the effect of advanced technology engines on the performance of subsonic airplanes and provide a vision of the potential which these advanced engines offered. The year 2005 was selected as the entry-into-service (EIS) date for the engine/airframe combinations. A set of four airplane classes (passenger and design range combinations) that were envisioned to span the needs for the 2005 EIS period was defined. The airframes for all classes were designed and sized using 2005 EIS advanced technology. Two airplanes were designed and sized for each class: one using current technology (1995) engines to provide a baseline, and one using advanced technology (2005) engines. The resulting engine/airframe combinations were compared and evaluated on the basis of sensitivity to basic engine performance parameters (e.g. SFC and engine weight) as well as DOC+I. The advanced technology engines provided significant reductions in fuel burn, weight, and wing area. Average values were as follows: reduction in fuel burn = 18%, reduction in wing area = 7%, and reduction in TOGW = 9%. Average DOC+I reduction was 3.5% using the pricing model based on payload-range index and 5% using the pricing model based on airframe weight. Noise and emissions were not considered.

  7. 40 CFR 142.302 - Who can issue a small system variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under this...

  8. Evaluation of surface and shallow depth dose reductions using a Superflab bolus during conventional and advanced external beam radiotherapy.

    PubMed

    Yoon, Jihyung; Xie, Yibo; Zhang, Rui

    2018-03-01

    The purpose of this study was to evaluate a methodology to reduce scatter and leakage radiations to patients' surface and shallow depths during conventional and advanced external beam radiotherapy. Superflab boluses of different thicknesses were placed on top of a stack of solid water phantoms, and the bolus effect on surface and shallow depth doses for both open and intensity-modulated radiotherapy (IMRT) beams was evaluated using thermoluminescent dosimeters and ion chamber measurements. Contralateral breast dose reduction caused by the bolus was evaluated by delivering clinical postmastectomy radiotherapy (PMRT) plans to an anthropomorphic phantom. For the solid water phantom measurements, surface dose reduction caused by the Superflab bolus was achieved only in out-of-field area and on the incident side of the beam, and the dose reduction increased with bolus thickness. The dose reduction caused by the bolus was more significant at closer distances from the beam. Most of the dose reductions occurred in the first 2-cm depth and stopped at 4-cm depth. For clinical PMRT treatment plans, surface dose reductions using a 1-cm Superflab bolus were up to 31% and 62% for volumetric-modulated arc therapy and 4-field IMRT, respectively, but there was no dose reduction for Tomotherapy. A Superflab bolus can be used to reduce surface and shallow depth doses during external beam radiotherapy when it is placed out of the beam and on the incident side of the beam. Although we only validated this dose reduction strategy for PMRT treatments, it is applicable to any external beam radiotherapy and can potentially reduce patients' risk of developing radiation-induced side effects. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Gear systems for advanced turboprops

    NASA Technical Reports Server (NTRS)

    Wagner, Douglas A.

    1987-01-01

    A new generation of transport aircraft will be powered by efficient, advanced turboprop propulsion systems. Systems that develop 5,000 to 15,000 horsepower have been studied. Reduction gearing for these advanced propulsion systems is discussed. Allison Gas Turbine Division's experience with the 5,000 horsepower reduction gearing for the T56 engine is reviewed and the impact of that experience on advanced gear systems is considered. The reliability needs for component design and development are also considered. Allison's experience and their research serve as a basis on which to characterize future gear systems that emphasize low cost and high reliability.

  10. A stochastic hybrid model for pricing forward-start variance swaps

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah

    2017-11-01

    Recently, market players have been exposed to the astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is being inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is being extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps were mainly focusing on instantaneous-start variance swaps without considering the interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. Comparison done with the Monte Carlo simulation which was set as our main reference point reveals that our pricing formula gains almost the same precision in a shorter execution time.
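
    As a reference point of the kind the authors use, a discretely-sampled forward-start variance swap can be priced by brute-force Monte Carlo under Heston variance and CIR rates. The sketch below uses a full-truncation Euler scheme with illustrative parameters (none are taken from the paper) and quotes the fair strike as annualized realized variance in variance points.

```python
import numpy as np

rng = np.random.default_rng(4)

S0, v0, r0 = 100.0, 0.04, 0.03
kappa_v, theta_v, sigma_v, rho = 2.0, 0.04, 0.30, -0.6   # Heston variance dynamics
kappa_r, theta_r, sigma_r = 1.2, 0.03, 0.10               # CIR short-rate dynamics
t_start, t_end = 0.5, 1.5                                  # forward-start sampling window
n_paths, n_steps, obs_every = 20_000, 750, 10              # sampling dates lie on the grid
dt = t_end / n_steps
start_step = int(round(t_start / dt))

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
r = np.full(n_paths, r0)
disc = np.ones(n_paths)                                    # pathwise discount factor
realized = np.zeros(n_paths)
logS_prev = None

for step in range(1, n_steps + 1):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    z3 = rng.standard_normal(n_paths)
    vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)        # full truncation
    disc *= np.exp(-rp * dt)
    S *= np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa_v * (theta_v - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
    r += kappa_r * (theta_r - rp) * dt + sigma_r * np.sqrt(rp * dt) * z3
    if step >= start_step and (step - start_step) % obs_every == 0:
        logS = np.log(S)
        if logS_prev is not None:
            realized += (logS - logS_prev) ** 2
        logS_prev = logS

# Fair strike K solves E[disc * (RV - K)] = 0, with RV the annualized realized variance.
rv = realized / (t_end - t_start) * 1e4                    # variance points
fair_strike = (disc * rv).mean() / disc.mean()
print(f"Monte Carlo fair strike of the forward-start variance swap: {fair_strike:.1f}")
```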

  11. On the Endogeneity of the Mean-Variance Efficient Frontier.

    ERIC Educational Resources Information Center

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  12. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    PubMed

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (<0.007). Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance

  13. A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity

    PubMed Central

    Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations for our method and others we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant we demonstrate how linkage disequilibrium (LD) can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D’ and relatively low r2 values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL) and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
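
    As a schematic of the joint test idea (not the authors' implementation, which also handles covariates and pairs the test with a parametric bootstrap), one can compare a normal likelihood with a single mean and variance against one with genotype-specific means and variances; twice the log-likelihood difference is referred to a chi-square distribution. Data and effect sizes below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated quantitative trait with both mean and variance effects of a 0/1/2 genotype.
geno = rng.integers(0, 3, size=1500)
trait = rng.normal(loc=0.2 * geno, scale=1.0 + 0.15 * geno)

def loglik(y):
    """Gaussian log-likelihood at the maximum-likelihood mean and standard deviation."""
    return stats.norm.logpdf(y, loc=y.mean(), scale=y.std()).sum()

ll_null = loglik(trait)                                        # one mean, one variance
ll_full = sum(loglik(trait[geno == g]) for g in (0, 1, 2))     # per-genotype mean and variance

lrt_mv = 2.0 * (ll_full - ll_null)       # 4 extra parameters under the full model
p_value = stats.chi2.sf(lrt_mv, df=4)
print(f"LRT(M,V) = {lrt_mv:.1f}, asymptotic p = {p_value:.2e}")
```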

  14. Variance and covariance estimates for weaning weight of Senepol cattle.

    PubMed

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

    Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²_A), maternal additive genetic variance (σ²_M), covariance between direct and maternal additive genetic effects (σ_AM), permanent maternal environmental variance (σ²_PE), and residual variance (σ²_ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²_A, 139.05 and 138.14 kg²; σ²_M, 307.04 and 288.90 kg²; σ_AM, -117.57 and -103.76 kg²; σ²_PE, -258.35 and -243.40 kg²; and σ²_ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive (h²_A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive (h²_M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal (r_AM) effects were -.57 and -.52 with and without A⁻¹, respectively.

  15. Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data

    PubMed Central

    Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk

    2015-01-01

    Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and shows promise to apply the SNP regression method to other traits and species, including human diseases. PMID:26438289

  16. Biologic lung volume reduction in advanced upper lobe emphysema: phase 2 results.

    PubMed

    Criner, Gerard J; Pinto-Plata, Victor; Strange, Charlie; Dransfield, Mark; Gotfried, Mark; Leeds, William; McLennan, Geoffrey; Refaely, Yael; Tewari, Sanjiv; Krasna, Mark; Celli, Bartolome

    2009-05-01

    Biologic lung volume reduction (BioLVR) is a new endobronchial treatment for advanced emphysema that reduces lung volume through tissue remodeling. Assess the safety and therapeutic dose of BioLVR hydrogel in upper lobe predominant emphysema. Open-labeled, multicenter phase 2 dose-ranging studies were performed with BioLVR hydrogel administered to eight subsegmental sites (four in each upper lobe) involving: (1) low-dose treatment (n = 28) with 10 ml per site (LD); and (2) high-dose treatment (n = 22) with 20 ml per site (HD). Safety was assessed by the incidence of serious medical complications. Efficacy was assessed by change from baseline in pulmonary function tests, dyspnea score, 6-minute walk distance, and health-related quality of life. After treatment there were no deaths and four serious treatment-related complications. A reduction in residual volume to TLC ratio at 12 weeks (primary efficacy outcome) was achieved with both LD (-6.4 ± 9.3%; P = 0.002) and HD (-5.5 ± 9.4%; P = 0.028) treatments. Improvements in pulmonary function in HD (6 mo: ΔFEV1 = +15.6%; P = 0.002; ΔFVC = +9.1%; P = 0.034) were greater than in LD patients (6 mo: ΔFEV1 = +6.7%; P = 0.021; ΔFVC = +5.1%; P = 0.139). LD- and HD-treated groups both demonstrated improved symptom scores and health-related quality of life. BioLVR improves physiology and functional outcomes up to 6 months with an acceptable safety profile in upper lobe predominant emphysema. Overall improvement was greater and responses more durable with 20 ml per site than 10 ml per site dosing. Clinical trial registered with www.clinicaltrials.gov (NCT 00435253 and NCT 00515164).

  17. Estimation of Variance in the Case of Complex Samples.

    ERIC Educational Resources Information Center

    Groenewald, A. C.; Stoker, D. J.

    In a complex sampling scheme it is desirable to select the primary sampling units (PSUs) without replacement to prevent duplications in the sample. Since the estimation of the sampling variances is more complicated when the PSUs are selected without replacement, L. Kish (1965) recommends that the variance be calculated using the formulas…

  18. Estimate of mortality reduction with implementation of advanced automatic collision notification.

    PubMed

    Lee, Ellen; Wu, Jingshu; Kang, Thomas; Craig, Matthew

    2017-05-29

    Advanced Automatic Collision Notification (AACN) is a system on a motor vehicle that notifies a public safety answering point (PSAP), either directly or through a third party, that the vehicle has had a crash. AACN systems enable earlier notification of a motor vehicle crash and provide an injury prediction that can help dispatchers and first responders make better decisions about how and where to transport the patient, thus getting the patient to definitive care sooner. The purposes of the current research are to identify the target population that could benefit from AACN, and to develop a reasonable estimate range of potential lives saved with implementation of AACN within the vehicle fleet. Data from the Fatality Analysis Reporting System (FARS) years 2009-2015 and National Automotive Sampling System-Crashworthiness Data System (NASS-CDS) years 2000-2015 were obtained. FARS data were used to determine absolute estimates of the target population who may receive benefit from AACN. These estimates accounted for a number of factors, such as whether a fatal occupant had nearby access to a trauma center and also was correctly identified by the injury severity prediction algorithm as having a "high probability of severe injury." NASS-CDS data were used to provide relative comparisons among subsets of the population. Specifically, relative survival rate ratios between occupants treated at trauma centers versus at non-trauma centers were determined using the nonparametric Kaplan-Meier estimator. Finally, the fatality reduction rate associated with trauma center care was combined with the previously published fatality reduction rate for faster notification time to develop a range for possible lives saved. Two relevant target populations were identified. A larger subset of 6893 fatalities can benefit only from earlier notification associated with AACN. A smaller subgroup of between 1495 and 2330 fatalities can benefit from both earlier notification and change in treatment

  19. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  20. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
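
    The generative model described above (zero-mean Gaussian samples whose variance is itself inverse-gamma distributed) is easy to prototype. The sketch below simulates such a signal, recovers per-window variances from the squared signal, and fits the inverse gamma distribution to them with SciPy; this windowed fit is a stand-in for the paper's marginal-likelihood procedure, and all parameter values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

alpha_true, beta_true = 6.0, 0.05          # inverse gamma shape and scale (illustrative)
n_windows, window_len = 400, 200

# Each window gets its own variance drawn from the inverse gamma distribution,
# and the EMG within the window is zero-mean Gaussian white noise at that variance.
variances = stats.invgamma.rvs(alpha_true, scale=beta_true, size=n_windows, random_state=rng)
emg = rng.normal(scale=np.sqrt(np.repeat(variances, window_len)))

# "Rectify and smooth": per-window variance estimates from the squared signal.
var_hat = (emg.reshape(n_windows, window_len) ** 2).mean(axis=1)

# Fit the inverse gamma distribution to the windowed estimates (location fixed at zero).
alpha_hat, _, beta_hat = stats.invgamma.fit(var_hat, floc=0.0)
print(f"true (shape, scale) = ({alpha_true}, {beta_true}); "
      f"fitted = ({alpha_hat:.2f}, {beta_hat:.3f})")
```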

  1. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Summary: Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required. PMID:23833701

  2. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required.
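
    For intuition, the Monte Carlo subsampling baseline against which the exact formulae were checked looks like the sketch below, written for species richness (PD would additionally need a phylogenetic tree and its branch lengths). The abundance vector is invented; Hurlbert's exact expression for the rarefied mean richness is included as a cross-check.

```python
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(7)

# Hypothetical stem counts per species in the full sample.
abundances = np.array([120, 60, 30, 15, 8, 4, 2, 1, 1, 1])
individuals = np.repeat(np.arange(abundances.size), abundances)
N, m = abundances.sum(), 50                  # rarefy to m individuals

# Monte Carlo rarefaction: repeatedly subsample m individuals and count species.
richness = np.array([
    np.unique(rng.choice(individuals, size=m, replace=False)).size
    for _ in range(5000)
])

# Exact mean richness under rarefaction (Hurlbert 1971) as a cross-check.
exact_mean = np.sum(1.0 - comb(N - abundances, m) / comb(N, m))

print(f"exact mean richness at m={m}: {exact_mean:.2f}")
print(f"Monte Carlo mean {richness.mean():.2f}, variance {richness.var(ddof=1):.2f}")
```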

  3. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... compliance plan proposed by the applicant can reasonably be implemented and will achieve compliance as...

  4. Liposuction for Advanced Lymphedema: A Multidisciplinary Approach for Complete Reduction of Arm and Leg Swelling.

    PubMed

    Boyages, John; Kastanias, Katrina; Koelmeyer, Louise A; Winch, Caleb J; Lam, Thomas C; Sherman, Kerry A; Munnoch, David Alex; Brorson, Håkan; Ngo, Quan D; Heydon-White, Asha; Magnussen, John S; Mackie, Helen

    2015-12-01

    This research describes and evaluates a liposuction surgery and multidisciplinary rehabilitation approach for advanced lymphedema of the upper and lower extremities. A prospective clinical study was conducted at an Advanced Lymphedema Assessment Clinic (ALAC) comprised of specialists in plastic surgery, rehabilitation, imaging, oncology, and allied health, at Macquarie University, Australia. Between May 2012 and 31 May 2014, a total of 104 patients attended the ALAC. Eligibility criteria for liposuction included (i) unilateral, non-pitting, International Society of Lymphology stage II/III lymphedema; (ii) limb volume difference greater than 25 %; and (iii) previously ineffective conservative therapies. Of 55 eligible patients, 21 underwent liposuction (15 arm, 6 leg) and had at least 3 months postsurgical follow-up (85.7 % cancer-related lymphedema). Liposuction was performed under general anesthesia using a published technique, and compression garments were applied intraoperatively and advised to be worn continuously thereafter. Limb volume differences, bioimpedance spectroscopy (L-Dex), and symptom and functional measurements (using the Patient-Specific Functional Scale) were taken presurgery and 4 weeks postsurgery, and then at 3, 6, 9, and 12 months postsurgery. Mean presurgical limb volume difference was 45.1 % (arm 44.2 %; leg 47.3 %). This difference reduced to 3.8 % (arm 3.6 %; leg 4.3 %) by 6 months postsurgery, a mean percentage volume reduction of 89.6 % (arm 90.2 %; leg 88.2 %) [p < 0.001]. All patients had improved symptoms and function. Bioimpedance spectroscopy showed reduced but ongoing extracellular fluid, consistent with the underlying lymphatic pathology. Liposuction is a safe and effective option for carefully selected patients with advanced lymphedema. Assessment, treatment, and follow-up by a multidisciplinary team is essential.

  5. Validating Variance Similarity Functions in the Entrainment Zone

    NASA Astrophysics Data System (ADS)

    Osman, M.; Turner, D. D.; Heus, T.; Newsom, R. K.

    2017-12-01

    In previous work, the water vapor variance in the entrainment zone was proposed to be proportional to the convective velocity scale, the water vapor mixing ratio gradient, and the Brunt-Vaisala frequency in the interfacial layer, while the variance of the vertical wind in the entrainment zone was defined in terms of the convective velocity scale. The variances in the entrainment zone have been hypothesized to depend on two distinct functions, which also depend on the Richardson number. To the best of our knowledge, these hypotheses have never been tested observationally. Simultaneous measurements of the eddy-correlation surface flux, wind shear profiles from wind profilers, and variance profile measurements of vertical motions and water vapor by Doppler and Raman lidars, respectively, provide a unique opportunity to thoroughly examine the functions used in defining the variances and validate them. These observations were made over the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site. We have identified about 30 cases from 2016 during which the convective boundary layer (CBL) is quasi-stationary and well mixed for at least 2 hours. The vertical profiles of turbulent fluctuations of the vertical wind and water vapor have been derived by applying an autocovariance technique to a set of 2-h time series to separate out the instrument random error. The error analysis of the lidar observations demonstrates that the lidars are capable of resolving the vertical structure of turbulence around the entrainment zone. Therefore, utilizing this unique combination of observations, this study focuses on extensively testing the hypotheses that the second-order moments are indeed proportional to the functions which also depend on the Richardson number. The coefficients that are used in defining the functions will also be determined observationally and compared against the values suggested by large eddy simulation (LES) studies.

  6. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

    Variances of the velocity components and scalars are important as indicators of the turbulence intensity. They also can be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. Motivated by these considerations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of the similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to closely follow the Monin-Obukhov similarity theory, and to yield reasonable estimates of the surface sensible heat fluxes when they are used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey Monin-Obukhov similarity. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of the surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are also affected by the heterogeneity but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original
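
    The "variance methods" referred to above invert a similarity relation to obtain a flux from a measured variance. A common free-convection form is sketched below with typical constant values (C1 of roughly 0.95 and von Karman constant 0.4; the coefficients fitted in any particular study may differ). Eliminating the friction velocity gives the sensible heat flux directly from the temperature standard deviation.

```python
import numpy as np

def sensible_heat_flux_from_variance(sigma_T, z, T, rho=1.2, cp=1005.0,
                                     C1=0.95, k=0.4, g=9.81):
    """Free-convection flux-variance estimate of the surface sensible heat flux (W m^-2).

    Starts from the similarity relation sigma_T / |T_*| = C1 * (-z/L)**(-1/3); eliminating
    the friction velocity gives H = rho*cp * sqrt(k*g*z/T) * (sigma_T/C1)**1.5.
    """
    return rho * cp * np.sqrt(k * g * z / T) * (sigma_T / C1) ** 1.5

# Example: sigma_T = 0.5 K measured at z = 60 m in a 300 K convective boundary layer.
print(f"H ~ {sensible_heat_flux_from_variance(0.5, 60.0, 300.0):.0f} W m^-2")
```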

  7. Noise-Reduction Benefits Analyzed for Over-the-Wing-Mounted Advanced Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2000-01-01

    As we look to the future, increasingly stringent civilian aviation noise regulations will require the design and manufacture of extremely quiet commercial aircraft. Also, the large fan diameters of modern engines with increasingly higher bypass ratios pose significant packaging and aircraft installation challenges. One design approach that addresses both of these challenges is to mount the engines above the wing. In addition to allowing the performance trend towards large diameters and high bypass ratio cycles to continue, this approach allows the wing to shield much of the engine noise from people on the ground. The Propulsion Systems Analysis Office at the NASA Glenn Research Center at Lewis Field conducted independent analytical research to estimate the noise reduction potential of mounting advanced turbofan engines above the wing. Certification noise predictions were made for a notional long-haul commercial quadjet transport. A large quad was chosen because, even under current regulations, such aircraft sometimes experience difficulty in complying with certification noise requirements with a substantial margin. Also, because of its long wing chords, a large airplane would receive the greatest advantage of any noise-shielding benefit.

  8. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... which the employer has taken to protect the health and safety of workers and adequately show that such... local Job Service office shall send the request to the State office which, in turn, shall forward it to...

  9. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  10. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to the rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
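
    The extreme value step in such a chain typically amounts to fitting a GEV distribution to annual maxima, reading off return levels, and propagating the forecast-to-control relative change. Below is a minimal sketch with synthetic flows; only the workflow, not the paper's data or ensemble, is represented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic annual maximum daily flows (m^3/s) for a control and a forecast climate run.
control_maxima = stats.genextreme.rvs(c=-0.1, loc=200, scale=60, size=40, random_state=rng)
forecast_maxima = stats.genextreme.rvs(c=-0.1, loc=240, scale=75, size=40, random_state=rng)

def return_level(maxima, return_period):
    """Flow exceeded on average once per `return_period` years, from a GEV fit to annual maxima."""
    c, loc, scale = stats.genextreme.fit(maxima)
    return stats.genextreme.ppf(1.0 - 1.0 / return_period, c, loc=loc, scale=scale)

for T in (2, 20, 100):
    q_ctl = return_level(control_maxima, T)
    q_fut = return_level(forecast_maxima, T)
    print(f"{T:>3}-yr event: control {q_ctl:6.0f}, forecast {q_fut:6.0f}, "
          f"relative change {100.0 * (q_fut / q_ctl - 1.0):+5.0f}%")
```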

  11. How does variance in fertility change over the demographic transition?

    PubMed Central

    Hruschka, Daniel J.; Burger, Oskar

    2016-01-01

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082
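
    The Poisson bound described above can be made concrete with a few lines of arithmetic. The sketch below is an illustration on synthetic counts, not the authors' analysis, and it assumes the reading that a Poisson reproduction process contributes variance equal to the mean, so at most (variance - mean)/variance of the total variance can reflect stable individual differences.

```python
# Hypothetical sketch of the Poisson-style bound: synthetic completed
# fertility counts for women whose individual rates vary (gamma mixing).
import numpy as np

rng = np.random.default_rng(1)
rates = rng.gamma(shape=20.0, scale=0.3, size=5000)   # individual rates, mean ~ 6
children = rng.poisson(rates)                         # completed fertility counts

mean, var = children.mean(), children.var(ddof=1)
# Poisson noise contributes variance equal to the mean (assumed reading),
# so the share attributable to individual differences is bounded above.
upper_bound = max(0.0, (var - mean) / var)
print(f"mean={mean:.2f}  variance={var:.2f}  "
      f"upper bound on individual-difference share={upper_bound:.2%}")
```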

  12. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  13. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  14. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
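
    As a minimal illustration of the comparison the article describes, the sketch below runs a one-way ANOVA on three invented groups of measurements using scipy; the data and group labels are hypothetical.

```python
# Minimal one-way ANOVA example (invented measurements): do at least two
# of the group means differ?
import numpy as np
from scipy.stats import f_oneway

group_a = np.array([5.1, 4.8, 5.5, 5.0, 4.9])
group_b = np.array([5.6, 5.9, 5.4, 6.1, 5.8])
group_c = np.array([5.0, 5.2, 4.7, 5.1, 5.3])

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```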

  15. Female scarcity reduces women's marital ages and increases variance in men's marital ages.

    PubMed

    Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom

    2010-08-05

    When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  16. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...

  17. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...

  18. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...

  19. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...

  20. 40 CFR 142.301 - What is a small system variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...

  1. Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.

    PubMed

    Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L

    2005-12-01

    To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained Florida subgroups with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated induced hypertensives from Strasbourg, France (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV) and results were compared to the non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 × 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. Induced OH sample variance (LSI) was 43× the natural OH sample variance mean. The same relationship for the SSI was 12×. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance mean results compared to controls and natural OH.

  2. Advanced supersonic propulsion study. [with emphasis on noise level reduction

    NASA Technical Reports Server (NTRS)

    Sabatella, J. A. (Editor)

    1974-01-01

    A study was conducted to determine the promising propulsion systems for advanced supersonic transport application, and to identify the critical propulsion technology requirements. It is shown that noise constraints have a major effect on the selection of the various engine types and cycle parameters. Several promising advanced propulsion systems were identified which show the potential of achieving lower levels of sideline jet noise than the first-generation supersonic transport systems. The non-afterburning turbojet engine, utilizing a very high level of jet suppression, shows the potential to achieve FAR 36 noise levels. The duct-heating turbofan with a low level of jet suppression is the most attractive engine for noise levels from FAR 36 to FAR 36 minus 5 EPNdB, and some series/parallel variable cycle engines show the potential of achieving noise levels down to FAR 36 minus 10 EPNdB with moderate additional penalty. The study also shows that an advanced supersonic commercial transport would benefit appreciably from advanced propulsion technology. The critical propulsion technology needed for a viable supersonic propulsion system, and the required specific propulsion technology programs are outlined.

  3. Genetic basis of between-individual and within-individual variance of docility.

    PubMed

    Martin, J G A; Pirotta, E; Petelle, M B; Blumstein, D T

    2017-04-01

    Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean trait and neglected variation in within-individual variance, or predictability of a trait. In fact, an important assumption of mixed-effects models used to estimate between-individual variance in mean traits is that within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but rarely estimated and accounted for. We used 11 389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both mean trait and predictability into their environmental and genetic components. We found that individuals differed both in their docility and in their predictability of docility with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both mean trait and within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of the variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that the variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  4. Enhancing target variance in personality impressions: highlighting the person in person perception.

    PubMed

    Paulhus, D L; Reynolds, S

    1995-12-01

    D. A. Kenny (1994) estimated the components of personality rating variance to be 15, 20, and 20% for target, rater, and relationship, respectively. To enhance trait variance and minimize rater variance, we designed a series of studies of personality perception in discussion groups (N = 79, 58, and 59). After completing a Big Five questionnaire, participants met 7 times in small groups. After Meetings 1 and 7, group members rated each other. By applying the Social Relations Model (D. A. Kenny and L. La Voie, 1984) to each Big Five dimension at each point in time, we were able to evaluate 6 rating effects as well as rating validity. Among the findings were that (a) target variance was the largest component (almost 30%), whereas rater variance was small (less than 11%); (b) rating validity improved significantly with acquaintance, although target variance did not; and (c) no reciprocity was found, but projection was significant for Agreeableness.

  5. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT...

  6. Investigation of the variance and spectral anisotropies of the solar wind turbulence with multiple point spacecraft observations

    NASA Astrophysics Data System (ADS)

    Vech, Daniel; Chen, Christopher

    2016-04-01

    One of the most important features of the plasma turbulence is the anisotropy, which arises due to the presence of the magnetic field. The understanding of the anisotropy is particularly important to reveal how the turbulent cascade operates. It is well known that anisotropy exists with respect to the mean magnetic field; however, recent theoretical studies have suggested anisotropy with respect to the radial direction. The purpose of this study is to investigate the variance and spectral anisotropies of the solar wind turbulence with multiple point spacecraft observations. The study includes the Advanced Composition Explorer (ACE), WIND and Cluster spacecraft data. The second-order structure functions are derived for two different spacecraft configurations: when the pair of spacecraft are separated radially (with respect to the spacecraft-Sun line) and when they are separated along the transverse direction. We analyze the effect of the different sampling directions on the variance anisotropy, global spectral anisotropy, local 3D spectral anisotropy and discuss the implications for our understanding of solar wind turbulence.

  7. The variance modulation associated with the vestibular evoked myogenic potential.

    PubMed

    Lütkenhöner, Bernd; Rudack, Claudia; Basel, Türker

    2011-07-01

    Model considerations suggest that the sound-induced inhibition underlying the vestibular evoked myogenic potential (VEMP) briefly reduces the variance of the electromyogram (EMG) from which the VEMP is derived. Although more difficult to investigate, this inhibitory modulation of the variance promises to be a specific measure of the inhibition, in that respect being superior to the VEMP itself. This study aimed to verify the theoretical predictions. Archived data from 672 clinical VEMP investigations, comprising about 300,000 EMG records altogether, were pooled. Both the complete data pool and subsets of data representing VEMPs of varying degrees of distinctness were analyzed. The data were generally normalized so that the EMG had variance one. Regarding VEMP deflection p13, the data confirm the theoretical predictions. At the latency of deflection n23, however, an additional excitatory component, showing a maximal effect around 30 ms, appears to contribute. Studying the variance modulation may help to identify and characterize different components of the VEMP. In particular, it appears to be possible to distinguish between inhibition and excitation. The variance modulation provides information not being available in the VEMP itself. Thus, studying this measure may significantly contribute to our understanding of the VEMP phenomenon. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Hydrograph variances over different timescales in hydropower production networks

    NASA Astrophysics Data System (ADS)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of involved time-series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
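
    The spectral bookkeeping underlying this kind of analysis rests on Parseval's identity: the variance of a discharge series equals the sum of its per-frequency power, so variance can be attributed to short- and long-period bands. The sketch below is an independent illustration on a synthetic daily series, not the authors' model of the River Dalälven.

```python
# Minimal sketch (not the authors' code): Parseval's identity lets the
# variance of a discharge series be split across frequency bands.
import numpy as np

rng = np.random.default_rng(2)
n = 4096
t = np.arange(n)
q = 100 + 20*np.sin(2*np.pi*t/365) + 5*np.sin(2*np.pi*t/7) + rng.normal(0, 3, n)

qd = q - q.mean()
X = np.fft.fft(qd)
power = np.abs(X)**2 / n**2            # per-frequency contribution to variance
print("time-domain variance :", qd.var())
print("sum of spectral power:", power.sum())   # the two agree (Parseval)

# share of variance carried by periods shorter than one week
freqs = np.fft.fftfreq(n, d=1.0)       # cycles per day
short = np.abs(freqs) > 1/7
print("short-term share     :", power[short].sum() / power.sum())
```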

  9. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    PubMed

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders: ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males), were compared with two non-referred groups [control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in likelihood to express gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  10. Using variance structure to quantify responses to perturbation in fish catches

    USGS Publications Warehouse

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  11. Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition

    USGS Publications Warehouse

    Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina

    2012-01-01

    Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site  =  36%, residual  =  64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least

  12. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding in which the maximum information entropy probability model of prediction error variances plays an important role and it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  13. TU-H-CAMPUS-IeP1-01: Bias and Computational Efficiency of Variance Reduction Methods for the Monte Carlo Simulation of Imaging Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, D; Badano, A; Sempau, J

    Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to maintain simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of original probability (no VRT) and the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution), and DS with fractions of 0.2 and 0.8 of the optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged fast (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel for DS VRT for the point response function between analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean for the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.
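
    The reweighting idea reported above can be illustrated with a toy importance-sampling calculation. The sketch below is not fastDETECT2 or hybridMANTIS code; it assumes a simple detector cone and shows that weighting each directed photon by the ratio of the analog to the biased direction density keeps the estimate of the detection fraction unbiased.

```python
# Hypothetical illustration of directed-sampling weights: a fraction f of
# optical photons is emitted inside the detector cone and every photon
# carries weight w = p_analog(direction) / p_biased(direction).
import numpy as np

rng = np.random.default_rng(3)
n, f, cos_c = 200_000, 0.8, 0.9             # cone: cos(theta) > 0.9
omega_cone = 2*np.pi*(1 - cos_c)            # solid angle of the cone
p_iso = 1.0 / (4*np.pi)                     # analog (isotropic) density

directed = rng.random(n) < f
mu = np.where(directed,
              rng.uniform(cos_c, 1.0, n),   # uniform per solid angle in cone
              rng.uniform(-1.0, 1.0, n))    # isotropic otherwise
hit = mu > cos_c                            # photon reaches the sensor plane

# biased direction density depends only on whether the direction is in the cone
p_biased = np.where(hit, f/omega_cone + (1-f)*p_iso, (1-f)*p_iso)
w = p_iso / p_biased

print("weighted estimate:", np.mean(w * hit))
print("exact fraction   :", omega_cone / (4*np.pi))
```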

  14. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

    The problem of determining stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, with the use of a conservative value for the field size and the crop statistics from the small political subdivision level, when the estimated stratum variances were compared to those obtained using the LANDSAT data.
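
    Once stratum variances are in hand, the usual way to turn them into an optimum sample allocation is Neyman allocation, which assigns sampling units in proportion to stratum size times stratum standard deviation. The sketch below uses invented stratum sizes and standard deviations, not the Great Plains figures from the study.

```python
# Minimal sketch (illustrative numbers): Neyman allocation from estimated
# stratum standard deviations.
import numpy as np

N_h = np.array([1200, 800, 500, 300])      # stratum sizes (sampling units)
S_h = np.array([40.0, 25.0, 60.0, 15.0])   # estimated stratum std deviations
n_total = 200                              # total segments the survey can afford

weights = N_h * S_h
n_h = np.rint(n_total * weights / weights.sum()).astype(int)
print("allocation per stratum:", n_h)      # more units where N_h * S_h is large
```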

  15. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    PubMed

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in

  16. 29 CFR 1905.11 - Variances and other relief under section 6(d).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(d). 1905.11... section 6(d). (a) Application for variance. Any employer, or class of employers, desiring a variance authorized by section 6(d) of the Act may file a written application containing the information specified in...

  17. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  18. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel

  19. Allowing variance may enlarge the safe operating space for exploited ecosystems.

    PubMed

    Carpenter, Stephen R; Brock, William A; Folke, Carl; van Nes, Egbert H; Scheffer, Marten

    2015-11-17

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2-4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them.

  20. Allowing variance may enlarge the safe operating space for exploited ecosystems

    PubMed Central

    Carpenter, Stephen R.; Brock, William A.; Folke, Carl; van Nes, Egbert H.; Scheffer, Marten

    2015-01-01

    Variable flows of food, water, or other ecosystem services complicate planning. Management strategies that decrease variability and increase predictability may therefore be preferred. However, actions to decrease variance over short timescales (2–4 y), when applied continuously, may lead to long-term ecosystem changes with adverse consequences. We investigated the effects of managing short-term variance in three well-understood models of ecosystem services: lake eutrophication, harvest of a wild population, and yield of domestic herbivores on a rangeland. In all cases, actions to decrease variance can increase the risk of crossing critical ecosystem thresholds, resulting in less desirable ecosystem states. Managing to decrease short-term variance creates ecosystem fragility by changing the boundaries of safe operating spaces, suppressing information needed for adaptive management, cancelling signals of declining resilience, and removing pressures that may build tolerance of stress. Thus, the management of variance interacts strongly and inseparably with the management of resilience. By allowing for variation, learning, and flexibility while observing change, managers can detect opportunities and problems as they develop while sustaining the capacity to deal with them. PMID:26438857

  1. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum-to-minimum power (approximately equal to 3:1 up to approximately equal to 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
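
    The variance directions discussed above are conventionally obtained with minimum variance analysis: eigen-decomposition of the field covariance matrix, whose extreme eigenvalues give the power anisotropy and whose smallest-eigenvalue eigenvector is compared with the mean-field direction. The sketch below applies that standard recipe to a synthetic field; it is not the authors' data pipeline.

```python
# Minimal sketch of standard minimum-variance analysis on a synthetic field.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
B0 = np.array([5.0, 0.0, 0.0])                         # mean field along x
fluct = rng.normal(0.0, [0.2, 1.0, 0.8], size=(n, 3))  # anisotropic fluctuations
B = B0 + fluct

M = np.cov(B, rowvar=False)             # 3x3 magnetic variance matrix
eigval, eigvec = np.linalg.eigh(M)      # eigenvalues in ascending order
min_dir = eigvec[:, 0]                  # minimum-variance direction
anisotropy = eigval[-1] / eigval[0]     # max-to-min power ratio
cosang = min(1.0, abs(min_dir @ B0) / np.linalg.norm(B0))

print(f"max/min power ratio            : {anisotropy:.1f}")
print(f"angle(min variance, mean field): {np.degrees(np.arccos(cosang)):.1f} deg")
```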

  2. Estimating integrated variance in the presence of microstructure noise using linear regression

    NASA Astrophysics Data System (ADS)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise arises. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
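
    A minimal version of the regression idea can be sketched as follows. Under the common assumption of i.i.d. microstructure noise, the expected realized variance is roughly IV + 2·n·ω², so regressing realized variances computed on coarser sampling grids against the number of returns gives an intercept that estimates the integrated variance and a slope that estimates twice the noise variance. The code uses simulated prices and illustrates that standard relationship; it is not necessarily the paper's exact estimator or test.

```python
# Illustrative sketch: regress subsampled realized variances on the number
# of returns; intercept ~ integrated variance, slope ~ 2 * noise variance.
import numpy as np

rng = np.random.default_rng(4)
n = 23400                                   # one "trading day" of seconds
true_iv = 1e-4
eff = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n))   # efficient log-price
obs = eff + rng.normal(0, 5e-4, n)                         # + microstructure noise

counts, rvs = [], []
for k in range(1, 61):                      # sample every k-th observation
    p = obs[::k]
    rvs.append(np.sum(np.diff(p)**2))
    counts.append(len(p) - 1)

slope, intercept = np.polyfit(counts, rvs, 1)
print("IV estimate (intercept):", intercept, " true IV:", true_iv)
print("noise variance estimate:", slope / 2, " true:", (5e-4)**2)
```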

  3. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    PubMed

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
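
    One of the clustering measures this line of work relies on is the variance partition coefficient (intraclass correlation), the share of outcome variance lying between collective units such as neighbourhoods. The sketch below estimates it with a simple balanced one-way ANOVA estimator on synthetic data; the group sizes and variances are assumptions for illustration only.

```python
# Illustrative sketch (synthetic data): variance partition coefficient for
# a balanced set of neighbourhoods.
import numpy as np

rng = np.random.default_rng(8)
k, m = 60, 40                                    # neighbourhoods, people each
u = rng.normal(0, 1.0, k)                        # neighbourhood effects
y = u[:, None] + rng.normal(0, 3.0, (k, m))      # individual outcomes

msb = m * y.mean(axis=1).var(ddof=1)             # between-group mean square
msw = y.var(axis=1, ddof=1).mean()               # within-group mean square
sigma2_between = max(0.0, (msb - msw) / m)       # ANOVA estimator
vpc = sigma2_between / (sigma2_between + msw)
print(f"variance partition coefficient ~ {vpc:.3f}")   # true value 1/(1+9) = 0.1
```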

  4. Boundary Conditions for Scalar (Co)Variances over Heterogeneous Surfaces

    NASA Astrophysics Data System (ADS)

    Machulskaya, Ekaterina; Mironov, Dmitrii

    2018-05-01

    The problem of boundary conditions for the variances and covariances of scalar quantities (e.g., temperature and humidity) at the underlying surface is considered. If the surface is treated as horizontally homogeneous, Monin-Obukhov similarity suggests the Neumann boundary conditions that set the surface fluxes of scalar variances and covariances to zero. Over heterogeneous surfaces, these boundary conditions are not a viable choice since the spatial variability of various surface and soil characteristics, such as the ground fluxes of heat and moisture and the surface radiation balance, is not accounted for. Boundary conditions are developed that are consistent with the tile approach used to compute scalar (and momentum) fluxes over heterogeneous surfaces. To this end, the third-order transport terms (fluxes of variances) are examined analytically using a triple decomposition of fluctuating velocity and scalars into the grid-box mean, the fluctuation of tile-mean quantity about the grid-box mean, and the sub-tile fluctuation. The effect of the proposed boundary conditions on mixing in an archetypical stably-stratified boundary layer is illustrated with a single-column numerical experiment. The proposed boundary conditions should be applied in atmospheric models that utilize turbulence parametrization schemes with transport equations for scalar variances and covariances including the third-order turbulent transport (diffusion) terms.

  5. Variance in prey abundance influences time budgets of breeding seabirds: Evidence from pigeon guillemots Cepphus columba

    USGS Publications Warehouse

    Litzow, Michael A.; Piatt, John F.

    2003-01-01

    We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.

  6. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  7. Three Averaging Techniques for Reduction of Antenna Temperature Variance Measured by a Dicke Mode, C-Band Radiometer

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Lawrence, Roland W.

    2000-01-01

    As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
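
    The direction of these results can be reproduced with a toy difference-measurement model: if the antenna and reference means are each formed from k_ant and k_ref noisy samples, the variance of their difference scales as 1/k_ant + 1/k_ref, so a higher antenna duty cycle combined with averaging reference data over a longer memory lowers Delta-T. The simulation below is a generic illustration with made-up numbers, not the benchtop radiometer model.

```python
# Toy sketch (not the instrument model): variance of an antenna-minus-
# reference estimate for different sample counts per side.
import numpy as np

rng = np.random.default_rng(9)
sigma, trials = 1.0, 20_000   # per-sample noise (K) and Monte Carlo repetitions

def delta_t_std(k_ant, k_ref):
    """Std of the antenna-minus-reference estimate for given sample counts."""
    ant = rng.normal(300.0, sigma, (trials, k_ant)).mean(axis=1)
    ref = rng.normal(295.0, sigma, (trials, k_ref)).mean(axis=1)
    return (ant - ref).std()

print("50/50 duty cycle, no extra averaging :", delta_t_std(10, 10))
print("83/17 duty cycle, no extra averaging :", delta_t_std(17, 3))
print("83/17 duty cycle + averaged reference:", delta_t_std(17, 30))
```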

  8. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes.

    PubMed

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1 and 16S rRNA genes, by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs include sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that the Fenton oxidation and UV/H2O2 process could reduce selected ARGs effectively. Oxidation by the Fenton process was slightly better than that of the UV/H2O2 method. Particularly, for the Fenton oxidation, under the optimal condition wherein Fe(2+)/H2O2 had a molar ratio of 0.1 and an H2O2 concentration of 0.01 mol L(-1) with a pH of 3.0 and a reaction time of 2 h, 2.58-3.79 logs of target genes were removed. Under the initial effluent pH condition (pH=7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, when the pH was 3.5 with an H2O2 concentration of 0.01 mol L(-1) accompanied by 30 min of UV irradiation, all ARGs could achieve a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at a pH of 7.0. The Fenton oxidation and UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by many parameters, including initial Fe(2+)/H2O2 molar ratios, H2O2 concentration, solution pH, and reaction time. Among these factors, reagent concentrations and pH values are the most important factors during AOPs. Copyright © 2016 Elsevier B.V. All rights reserved.
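
    The first-order kinetics mentioned above imply that the log-scale removal grows linearly with reaction time, so the rate constant can be read off a straight-line fit. The readings in the sketch below are invented for illustration; only the fitting procedure is of interest.

```python
# Illustrative first-order kinetics fit on made-up readings:
# ln(C/C0) = -k * t, with ln(C/C0) = -ln(10) * log10_removal.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # reaction time, h (assumed)
log10_removal = np.array([0.0, 0.9, 1.8, 2.6, 3.5])  # log10 reduction (assumed)

slope, intercept = np.polyfit(t, -np.log(10) * log10_removal, 1)
print(f"first-order rate constant k ~ {-slope:.2f} per hour")
```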

  9. Testing Interaction Effects without Discarding Variance.

    ERIC Educational Resources Information Center

    Lopez, Kay A.

    Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…

  10. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then built on it in this paper. By partitioning the sample points of the output into different subsets according to each input in turn, the proposed approach can evaluate all the main effects concurrently from a single group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is satisfied. Furthermore, a new interpretation of the partition idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
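
    A bare-bones version of the space-partition estimator can be written in a few lines: sort the sample by one input, split the outputs into equal-count subsets, and take the variance of the subset means over the total variance as that input's main effect, reusing the same sample for every input. The test function below is the Ishigami function, a common benchmark assumed here for illustration; it is not necessarily one of the paper's examples.

```python
# Minimal sketch of the space-partition idea: first-order indices from a
# single sample via the law of total variance, Var(E[Y | X_i bin]) <= Var(Y).
import numpy as np

rng = np.random.default_rng(5)
n, bins = 100_000, 50
X = rng.uniform(-np.pi, np.pi, size=(n, 3))
Y = np.sin(X[:, 0]) + 7*np.sin(X[:, 1])**2 + 0.1*X[:, 2]**4*np.sin(X[:, 0])

var_y = Y.var()
for i in range(3):
    order = np.argsort(X[:, i])
    groups = np.array_split(Y[order], bins)          # equal-count subsets
    cond_means = np.array([g.mean() for g in groups])
    # for the Ishigami function, the third input has no first-order effect
    print(f"S_{i+1} ~ {cond_means.var() / var_y:.3f}")
```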

  11. Variances and uncertainties of the sample laboratory-to-laboratory variance (S(L)2) and standard deviation (S(L)) associated with an interlaboratory study.

    PubMed

    McClure, Foster D; Lee, Jung K

    2012-01-01

    The validation process for an analytical method usually employs an interlaboratory study conducted as a balanced completely randomized model involving a specified number of randomly chosen laboratories, each analyzing a specified number of randomly allocated replicates. For such studies, formulas to obtain approximate unbiased estimates of the variance and uncertainty of the sample laboratory-to-laboratory (lab-to-lab) STD (S(L)) have been developed primarily to account for the uncertainty of S(L) when there is a need to develop an uncertainty budget that includes the uncertainty of S(L). For the sake of completeness on this topic, formulas to estimate the variance and uncertainty of the sample lab-to-lab variance (S(L)2) were also developed. In some cases, it was necessary to derive the formulas based on an approximate distribution for S(L)2.

  12. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to

  13. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    PubMed

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  14. Origin and Consequences of the Relationship between Protein Mean and Variance

    PubMed Central

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021
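
    As an illustration of the power-law relationship above (toy data, not the study's measurements), the exponent can be recovered by a straight-line fit in log-log space:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.logspace(0, 4, 300)                                   # mean expression levels
    sigma2 = 0.5 * mu ** 1.6 * rng.lognormal(0.0, 0.3, mu.size)   # noisy power law, exponent 1.6

    slope, intercept = np.polyfit(np.log(mu), np.log(sigma2), 1)
    print(f"estimated exponent = {slope:.2f}, prefactor = {np.exp(intercept):.2f}")
    ```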

  15. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
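
    A hedged sketch of the combined strategy described above, applied to a synthetic sliding-window correlation series: Fisher transformation followed by an additional Box-Cox step. The window length and the shift used to keep the Box-Cox input positive are illustrative choices, not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Two synthetic "region" signals with a slowly drifting true correlation.
    t = np.arange(600)
    shared = np.sin(2 * np.pi * t / 200)
    x = shared * (0.3 + 0.3 * np.sin(2 * np.pi * t / 300)) + rng.normal(size=t.size)
    y = shared + rng.normal(size=t.size)

    window = 60
    r = np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                  for i in range(t.size - window)])

    z = np.arctanh(r)                      # Fisher transformation
    shifted = z - z.min() + 1e-6           # Box-Cox requires strictly positive input
    z_bc, lam = stats.boxcox(shifted)      # additional Box-Cox step

    print(f"variance before: {z.var():.4f}, after Box-Cox (lambda={lam:.2f}): {z_bc.var():.4f}")
    ```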

  16. Reductions in Perceived Injustice are Associated With Reductions in Disability and Depressive Symptoms After Total Knee Arthroplasty.

    PubMed

    Yakobov, Esther; Scott, Whitney; Stanish, William D; Tanzer, Michael; Dunbar, Michael; Richardson, Glen; Sullivan, Michael J L

    2018-05-01

    Perceptions of injustice have been associated with problematic recovery outcomes in individuals with a wide range of debilitating pain conditions. It has been suggested that, in patients with chronic pain, perceptions of injustice might arise in response to experiences characterized by illness-related pain severity, depressive symptoms, and disability. If symptom severity and disability are important contributors to perceived injustice (PI), it follows that interventions that yield reductions in symptom severity and disability should also contribute to reductions in perceptions of injustice. The present study examined the relative contributions of postsurgical reductions in pain severity, depressive symptoms, and disability to the prediction of reductions in perceptions of injustice. The study sample consisted of 110 individuals (69 women and 41 men) with osteoarthritis of the knee scheduled for total knee arthroplasty (TKA). Patients completed measures of perceived injustice, depressive symptoms, pain, and disability at their presurgical evaluation, and at 1-year follow-up. The results revealed that reductions in depressive symptoms and disability, but not pain severity, were correlated with reductions in perceived injustice. Regression analyses revealed that reductions in disability and reductions in depressive symptoms contributed modest but significant unique variance to the prediction of postsurgical reductions in perceived injustice. The present findings are consistent with current conceptualizations of injustice appraisals that propose a central role for symptom severity and disability as determinants of perceptions of injustice in patients with persistent pain. The results suggest that the inclusion of psychosocial interventions that target depressive symptoms and perceived injustice might augment the impact of rehabilitation programs made available for individuals recovering from TKA.

  17. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  18. Gravity Wave Variances and Propagation Derived from AIRS Radiances

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.; Eckermann, S. D.

    2012-01-01

    As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS 90 field-of-views (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).

  19. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
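
    A minimal sketch (synthetic residuals only) of the analysis style described above: treat the data-minus-model series as a red-noise process with an assumed 27-day modulation, and examine its power spectral density as a function of time scale.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(4)
    n = 24 * 365 * 3                                 # three years of hourly residuals
    white = rng.normal(size=n)
    red = np.cumsum(white) * 0.01                    # power-law-like background
    solar_rot = 0.2 * np.sin(2 * np.pi * np.arange(n) / (27 * 24))  # assumed 27-day term
    residuals = red + solar_rot + 0.05 * white       # data-minus-model proxy

    freqs, psd = signal.welch(residuals, fs=1.0, nperseg=4096)      # fs in cycles per hour
    period_days = 1.0 / freqs[1:] / 24.0
    band = (period_days > 20) & (period_days < 35)
    peak_period = period_days[band][np.argmax(psd[1:][band])]
    print(f"strongest enhancement in the 20-35 day band near {peak_period:.1f} days")
    ```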

  20. Fan Noise Reduction: An Overview

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2001-01-01

    Fan noise reduction technologies developed as part of the engine noise reduction element of the Advanced Subsonic Technology Program are reviewed. Developments in low-noise fan stage design, swept and leaned outlet guide vanes, active noise control, fan flow management, and scarfed inlet are discussed. In each case, a description of the method is presented and, where available, representative results and general conclusions are discussed. The review concludes with a summary of the accomplishments of the AST-sponsored fan noise reduction research and a few thoughts on future work.

  1. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈ (1, 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
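
    A small illustration (not the paper's maximum-likelihood approach or correction mechanism) of the downward bias: for a Pareto law with tail index alpha the true Gini is 1/(2*alpha - 1), and the standard nonparametric estimator falls short of it when alpha < 2.

    ```python
    import numpy as np

    def gini_nonparametric(x):
        """Standard sample Gini coefficient based on the ordered sample."""
        x = np.sort(x)
        n = x.size
        ranks = np.arange(1, n + 1)
        return 2.0 * (ranks * x).sum() / (n * x.sum()) - (n + 1.0) / n

    rng = np.random.default_rng(5)
    alpha, n, reps = 1.3, 1000, 500
    true_gini = 1.0 / (2.0 * alpha - 1.0)
    # rng.pareto gives a Lomax draw; adding 1 yields a standard Pareto with minimum 1.
    estimates = [gini_nonparametric(rng.pareto(alpha, n) + 1.0) for _ in range(reps)]
    print(f"true Gini = {true_gini:.3f}, mean nonparametric estimate = {np.mean(estimates):.3f}")
    ```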

  2. Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia

    DOE PAGES

    Gao, Weimin; Francis, Arokiasamy J.

    2013-01-01

    Previously, it has been shown that uranium reduction under fermentation conditions is common among clostridia species, but that strains differ in the extent of their capability and that the pH of the culture significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized, and their effects on the variance in uranium reduction capability are discussed. The relationship between hydrogen metabolism and uranium reduction has then been further explored, and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia, in contrast to the case with nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not result in uranium(VI) reduction. In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. In the end, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to the overall metabolism, especially hydrogen (H2) production.

  3. How the Weak Variance of Momentum Can Turn Out to be Negative

    NASA Astrophysics Data System (ADS)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities; therefore, investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of 'subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  4. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.

    2016-01-01

    In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.

  5. Speckle measuring instrument based on biological characteristics of the human eyes and speckle reduction with advanced electromagnetic micro-scanning mirror

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Fang, Tao; Sun, Min Yuan; Gao, Wei Nan; Zhang, Shuo; Bi, Yong

    2018-07-01

    Laser speckle is a major issue for laser projection displays. In various speckle reduction techniques, speckle is quantified with a speckle contrast value. However, the measured speckle contrast correlates poorly with the subjective speckle perception of a human observer. Here, we investigate the characteristics of the human eye and propose a simplified optical transfer function for it. Accordingly, two human-eye-modeled speckle measuring sets are configured. Based on this experimental set, an advanced electromagnetic micro-scanning mirror (EM-MSM) is exploited; it is 6.5 mm in diameter, with half scan angles of 7.8° for a horizontal scan and 6.53° for a vertical scan. Finally, we quantitatively show that images generated with the EM-MSM exhibit superior quality. By providing human-eye-modeled speckle measuring instruments and an EM-MSM for speckle reduction, this work offers a promising contribution to laser projector development.

  6. Age-related variance in decisions under ambiguity is explained by changes in reasoning, executive functions, and decision-making under risk.

    PubMed

    Schiebener, Johannes; Brand, Matthias

    2017-06-01

    Previous literature has explained older individuals' disadvantageous decision-making under ambiguity in the Iowa Gambling Task (IGT) by reduced emotional warning signals preceding decisions. We argue that age-related reductions in IGT performance may also be explained by reductions in certain cognitive abilities (reasoning, executive functions). In 210 participants (18-86 years), we found that the age-related variance on IGT performance occurred only in the last 60 trials. The effect was mediated by cognitive abilities and their relation with decision-making performance under risk with explicit rules (Game of Dice Task). Thus, reductions in cognitive functions in older age may be associated with both a reduced ability to gain explicit insight into the rules of the ambiguous decision situation and with failure to choose the less risky options consequently after the rules have been understood explicitly. Previous literature may have underestimated the relevance of cognitive functions for age-related decline in decision-making performance under ambiguity.

  7. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    DTIC Science & Technology

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the...equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the...emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  8. Semiautomated analysis of embryoscope images: Using localized variance of image intensity to detect embryo developmental stages.

    PubMed

    Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester

    2015-02-01

    Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods of time and, together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring, and large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as a qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and usable for automatic detection of region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
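
    A minimal sketch of the "localized variance" idea (not the study's pipeline): the per-pixel variance of grey levels within a small neighbourhood, computed from local first and second moments, which peaks at rapidly changing structures such as cell boundaries.

    ```python
    import numpy as np
    from scipy import ndimage

    def localized_variance(image, size=9):
        """Per-pixel variance of grey levels within a size x size neighbourhood."""
        image = image.astype(float)
        local_mean = ndimage.uniform_filter(image, size)
        local_mean_sq = ndimage.uniform_filter(image ** 2, size)
        return local_mean_sq - local_mean ** 2   # Var = E[I^2] - (E[I])^2

    # Toy frame: a bright disc (stand-in for an embryo) on a noisy background.
    rng = np.random.default_rng(6)
    yy, xx = np.mgrid[0:128, 0:128]
    frame = 50.0 + 30.0 * ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2) + rng.normal(0, 2, (128, 128))
    var_map = localized_variance(frame)
    print(f"edge variance ~ {var_map.max():.1f}, background variance ~ {var_map[:20, :20].mean():.1f}")
    ```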

  9. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176

  10. A class of multi-period semi-variance portfolio for petroleum exploration and development

    NASA Astrophysics Data System (ADS)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated. In addition, a hybrid genetic algorithm, which uses the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
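
    A minimal single-period sketch of the downside semi-variance that replaces variance in the model above; the multi-period formulation and the hybrid genetic algorithm of the paper are not reproduced here, and the project returns below are made up.

    ```python
    import numpy as np

    def semi_variance(returns, weights, target=None):
        """Mean squared shortfall of portfolio returns below a target (default: the mean)."""
        port = returns @ weights
        target = port.mean() if target is None else target
        downside = np.minimum(port - target, 0.0)
        return (downside ** 2).mean()

    rng = np.random.default_rng(7)
    returns = rng.normal([0.08, 0.12, 0.05], [0.10, 0.20, 0.04], size=(1000, 3))  # 3 projects
    weights = np.array([0.5, 0.3, 0.2])
    print(f"semi-variance = {semi_variance(returns, weights):.5f}, "
          f"variance = {(returns @ weights).var():.5f}")
    ```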

  11. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...

  12. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  13. Prospects for discovering pulsars in future continuum surveys using variance imaging

    NASA Astrophysics Data System (ADS)

    Dai, S.; Johnston, S.; Hobbs, G.

    2017-12-01

    In our previous paper, we developed a formalism for computing variance images from standard, interferometric radio images containing time and frequency information. Variance imaging with future radio continuum surveys allows us to identify radio pulsars and serves as a complement to conventional pulsar searches that are most sensitive to strictly periodic signals. Here, we carry out simulations to predict the number of pulsars that we can uncover with variance imaging in future continuum surveys. We show that the Australian SKA Pathfinder (ASKAP) Evolutionary Map of the Universe (EMU) survey can find ∼30 normal pulsars and ∼40 millisecond pulsars (MSPs) over and above the number known today, and similarly an all-sky continuum survey with SKA-MID can discover ∼140 normal pulsars and ∼110 MSPs with this technique. Variance imaging with EMU and SKA-MID will detect pulsars with large duty cycles and is therefore a potential tool for finding MSPs and pulsars in relativistic binary systems. Compared with current pulsar surveys at high Galactic latitudes in the Southern hemisphere, variance imaging with EMU and SKA-MID will be more sensitive, and will enable detection of pulsars with dispersion measures between ∼10 and 100 cm⁻³ pc.
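
    A toy image-plane illustration of the variance-imaging idea (the interferometric formalism of the earlier paper is not reproduced): a per-pixel variance map over a cube of snapshot images highlights a strongly modulated source even when its time-averaged flux matches that of a steady source.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_t, size = 200, 64
    cube = rng.normal(0.0, 1.0, size=(n_t, size, size))        # noise-only sky snapshots
    cube[:, 20, 20] += 5.0                                      # steady source
    cube[:, 40, 40] += 50.0 * (rng.random(n_t) < 0.1)           # pulsing source, same mean flux

    mean_map = cube.mean(axis=0)
    var_map = cube.var(axis=0)
    print(f"mean image:   steady {mean_map[20, 20]:.1f}, pulsar {mean_map[40, 40]:.1f}")
    print(f"variance map: steady {var_map[20, 20]:.1f}, pulsar {var_map[40, 40]:.1f}")
    ```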

  14. 32 CFR 165.7 - Waivers (including reductions).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...

  15. 32 CFR 165.7 - Waivers (including reductions).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...

  16. 32 CFR 165.7 - Waivers (including reductions).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...

  17. 42 CFR 488.64 - Remote facility variances for utilization review requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 5 2014-10-01 2014-10-01 false Remote facility variances for utilization review requirements. 488.64 Section 488.64 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a...

  18. 42 CFR 488.64 - Remote facility variances for utilization review requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 5 2012-10-01 2012-10-01 false Remote facility variances for utilization review requirements. 488.64 Section 488.64 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a...

  19. 42 CFR 488.64 - Remote facility variances for utilization review requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Remote facility variances for utilization review requirements. 488.64 Section 488.64 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a...

  20. 42 CFR 488.64 - Remote facility variances for utilization review requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 5 2013-10-01 2013-10-01 false Remote facility variances for utilization review requirements. 488.64 Section 488.64 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a...

  1. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation in remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to stratum variance estimation for wheat in the U.S. Great Plains and is evaluated on the basis of the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily when a conservative value (smaller than the expected value) is used for the field size and when crop statistics from the small political division level are used.
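
    For context, a minimal sketch of how such stratum variance estimates feed an optimum (Neyman) allocation, with sample sizes proportional to N_h * S_h; the stratum sizes and standard deviations below are hypothetical, not the crop statistics used in the study.

    ```python
    import numpy as np

    def neyman_allocation(stratum_sizes, stratum_stds, total_sample):
        """Allocate a fixed total sample across strata in proportion to N_h * S_h."""
        weights = np.asarray(stratum_sizes, dtype=float) * np.asarray(stratum_stds, dtype=float)
        return np.round(total_sample * weights / weights.sum()).astype(int)

    N_h = [500, 300, 200]       # segments per stratum (hypothetical)
    S_h = [12.0, 30.0, 8.0]     # estimated stratum standard deviations (hypothetical)
    print(neyman_allocation(N_h, S_h, total_sample=100))
    ```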

  2. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus, adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment

  3. Assessment of the Noise Reduction Potential of Advanced Subsonic Transport Concepts for NASA's Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Nickol, Craig L.

    2016-01-01

    Aircraft system noise is predicted for a portfolio of NASA advanced concepts with 2025 entry-into-service technology assumptions. The subsonic transport concepts include tube-and-wing configurations with engines mounted under the wing, over the wing nacelle integration, and a double deck fuselage with engines at a mid-fuselage location. Also included are hybrid wing body aircraft with engines upstream of the fuselage trailing edge. Both advanced direct drive engines and geared turbofan engines are modeled. Recent acoustic experimental information was utilized in the prediction for several key technologies. The 301-passenger class hybrid wing body with geared ultra high bypass engines is assessed at 40.3 EPNLdB cumulative below the Stage 4 certification level. Other hybrid wing body and unconventional tube-and-wing configurations reach levels of 33 EPNLdB or more below the certification level. Many factors contribute to the system level result; however, the hybrid wing body in the 301-passenger class, as compared to a tube-and-wing with conventional engine under wing installation, has 11.9 EPNLdB of noise reduction due to replacing reflection with acoustic shielding of engine noise sources. Therefore, the propulsion airframe aeroacoustic interaction effects clearly differentiate the unconventional configurations that approach levels close to or exceed the 42 EPNLdB goal.

  4. Comparison of noise reduction systems

    NASA Astrophysics Data System (ADS)

    Noel, S. D.; Whitaker, R. W.

    1991-06-01

    When using infrasound as a tool for verification, the most important measurement to determine yield has been the peak-to-peak pressure amplitude of the signal. Therefore, there is a need to operate at the most favorable signal-to-noise ratio (SNR) possible. Winds near the ground can degrade the SNR, thereby making accurate signal amplitude measurement difficult. Wind noise reduction techniques were developed to help alleviate this problem; however, a noise reducing system should reduce the noise, and should not introduce distortion of coherent signals. An experiment is described to study system response for a variety of noise reducing configurations to a signal generated by an underground test (UGT) at the Nevada Test Site (NTS). In addition to the signal, background noise reduction is examined through measurements of variance. Sensors using two particular geometries of noise reducing equipment, the spider and the cross appear to deliver the best SNR. Because the spider configuration is easier to deploy, it is now the most commonly used.

  5. Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles

    NASA Astrophysics Data System (ADS)

    Jain, Dhanesh; Lalwani, Mahendra

    2018-05-01

    The performance of a photovoltaic panel is strongly affected by changes in atmospheric conditions and by the angle of inclination. This article evaluates the optimum tilt angle and orientation angle (surface azimuth angle) for a solar photovoltaic array in order to obtain maximum solar irradiance and to reduce the variance of radiation over different sets or subsets of time periods. Non-linear regression and adaptive neuro-fuzzy inference system (ANFIS) methods are used for predicting the solar radiation. The results of ANFIS are more accurate than those of non-linear regression. These results are further used for evaluating the correlation and for estimating the optimum combination of tilt angle and orientation angle with the help of a general algebraic modelling system and a multi-objective genetic algorithm. The hourly average solar irradiation is calculated for different combinations of tilt angle and orientation angle using horizontal surface radiation data for Jodhpur (Rajasthan, India). The hourly average solar irradiance is calculated for three cases: zero variance, actual variance and double variance, at different time scenarios. It is concluded that monthly collected solar radiation produces better results than bimonthly, seasonally, half-yearly and yearly collected solar radiation. The profit obtained with a monthly varying angle is 4.6% higher with zero variance and 3.8% higher with actual variance than with an annually fixed angle.

  6. Defining ulnar variance in the adolescent wrist: measurement technique and interobserver reliability.

    PubMed

    Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P

    2011-02-01

    The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, the younger group (≤ 15 y of age) was significantly less negative for girls (boys, -0.8 mm and girls, -0.3 mm, p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r(p) 0.87-0.97 in boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  7. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  8. 42 CFR 488.64 - Remote facility variances for utilization review requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Remote facility variances for utilization review... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a... such facility or direct responsibility for the care of the patients being reviewed or, in the case of a...

  9. Advanced Rotorcraft Transmission (ART) program

    NASA Technical Reports Server (NTRS)

    Heath, Gregory F.; Bossler, Robert B., Jr.

    1993-01-01

    Work performed by the McDonnell Douglas Helicopter Company and Lucas Western, Inc. within the U.S. Army/NASA Advanced Rotorcraft Transmission (ART) Program is summarized. The design of a 5000 horsepower transmission for a next generation advanced attack helicopter is described. Government goals for the program were to define technology and detail design the ART to meet, as a minimum, a weight reduction of 25 percent, an internal noise reduction of 10 dB, and a mean-time-between-removal (MTBR) of 5000 hours compared to a state-of-the-art baseline transmission. The split-torque transmission developed using face gears achieved a 40 percent weight reduction, a 9.6 dB noise reduction and a 5270 hour MTBR in meeting or exceeding the above goals. Aircraft mission performance and cost improvements resulting from installation of the ART would include a 17 to 22 percent improvement in loss-exchange ratio during combat, a 22 percent improvement in mean-time-between-failure, a transmission acquisition cost savings of 23 percent, or $165K, per unit, and an average transmission direct operating cost savings of 33 percent, or $24K per flight hour. Face gear tests performed successfully at NASA Lewis are summarized. Also, program results of advanced material tooth scoring tests, single tooth bending tests, Charpy impact energy tests, compact tension fracture toughness tests and tensile strength tests are summarized.

  10. Daily Goals Formulation and Enhanced Visualization of Mechanical Ventilation Variance Improves Mechanical Ventilation Score.

    PubMed

    Walsh, Brian K; Smallwood, Craig; Rettig, Jordan; Kacmarek, Robert M; Thompson, John; Arnold, John H

    2017-03-01

    The systematic implementation of evidence-based practice through the use of guidelines, checklists, and protocols mitigates the risks associated with mechanical ventilation, yet variation in practice remains prevalent. Recent advances in software and hardware have allowed for the development and deployment of an enhanced visualization tool that identifies mechanical ventilation goal variance. Our aim was to assess the utility of daily goal establishment and a computer-aided visualization of variance. This study was composed of 3 phases: a retrospective observational phase (baseline) followed by 2 prospective sequential interventions. Phase I intervention comprised daily goal establishment of mechanical ventilation. Phase II intervention was the setting and monitoring of daily goals of mechanical ventilation with a web-based data visualization system (T3). A single score of mechanical ventilation was developed to evaluate the outcome. The baseline phase evaluated 130 subjects, phase I enrolled 31 subjects, and phase II enrolled 36 subjects. There were no differences in demographic characteristics between cohorts. A total of 171 verbalizations of goals of mechanical ventilation were completed in phase I. The use of T3 increased by 87% from phase I. Mechanical ventilation score improved by 8.4% in phase I and 11.3% in phase II from baseline (P = .032). The largest effect was in the low-risk VT category, with a 40.3% improvement from baseline in phase I, which was maintained at 39% improvement from baseline in phase II (P = .01). Mechanical ventilation score was 9% higher on average in those who survived. Daily goal formation and computer-enhanced visualization of mechanical ventilation variance were associated with an improvement in goal attainment, as evidenced by an improved mechanical ventilation score. Further research is needed to determine whether improvements in mechanical ventilation score through a targeted, process-oriented intervention will lead to

  11. Propulsion Noise Reduction Research in the NASA Advanced Air Transport Technology Project

    NASA Technical Reports Server (NTRS)

    Van Zante, Dale; Nark, Douglas; Fernandez, Hamilton

    2017-01-01

    The Aircraft Noise Reduction (ANR) sub-project is focused on the generation, development, and testing of component noise reduction technologies progressing toward the NASA far term noise goals while providing associated near and mid-term benefits. The ANR sub-project has efforts in airframe noise reduction, propulsion (including fan and core) noise reduction, acoustic liner technology, and propulsion airframe aeroacoustics for candidate conventional and unconventional aircraft configurations. The current suite of propulsion specific noise research areas is reviewed along with emerging facility and measurement capabilities. In the longer term, the changes in engine and aircraft configuration will influence the suite of technologies necessary to reduce noise in next generation systems.

  12. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    PubMed Central

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
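
    A hedged sketch of the residual approach described above: regress memory scores on demographic and brain-volume covariates and take each person's residual as the reserve estimate. The simulated covariates and coefficients are illustrative stand-ins for the study's neuropsychological and MRI measures.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 244
    age = rng.normal(75, 6, n)
    education = rng.normal(12, 4, n)
    hippocampus = rng.normal(6.0, 0.8, n)      # hypothetical volume scale
    wmh = rng.exponential(5.0, n)              # white matter hyperintensity volume
    memory = (-0.04 * age + 0.05 * education + 0.8 * hippocampus - 0.03 * wmh
              + rng.normal(0, 1.0, n))         # the residual term plays the role of "reserve"

    # Ordinary least squares; residuals are the reserve estimates.
    X = np.column_stack([np.ones(n), age, education, hippocampus, wmh])
    beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
    residual_reserve = memory - X @ beta
    print(f"residual memory variance = {residual_reserve.var():.3f}")
    ```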

  13. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... granted pursuant to this part shall have only future effect. In his discretion, the Assistant Secretary... RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  14. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... granted pursuant to this part shall have only future effect. In his discretion, the Assistant Secretary... RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  15. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  16. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  17. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  18. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  19. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was adopted by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  20. Decomposing genomic variance using information from GWA, GWE and eQTL analysis.

    PubMed

    Ehsani, A; Janss, L; Pomp, D; Sørensen, P

    2016-04-01

    A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance. © 2015 Stichting International Foundation for Animal Genetics.

  1. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    PubMed

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
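
    A toy contrast of the two basic strategies discussed above on an AR(1) series (the paper's theoretical bias/variance decomposition is not reproduced): the recursive strategy iterates a fitted one-step model, while the direct strategy fits a separate linear model for each horizon.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n, H = 500, 5
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.8 * y[t - 1] + rng.normal()

    train, test_origin = y[:400], 400

    # Recursive: fit y_t ~ y_{t-1} once, then iterate the fit H times.
    a = np.polyfit(train[:-1], train[1:], 1)
    recursive = [np.polyval(a, train[-1])]
    for _ in range(H - 1):
        recursive.append(np.polyval(a, recursive[-1]))

    # Direct: for each horizon h, fit a separate model y_{t+h} ~ y_t.
    direct = []
    for h in range(1, H + 1):
        b = np.polyfit(train[:-h], train[h:], 1)
        direct.append(np.polyval(b, train[-1]))

    actual = y[test_origin:test_origin + H]
    print("recursive errors:", np.round(np.array(recursive) - actual, 2))
    print("direct errors:   ", np.round(np.array(direct) - actual, 2))
    ```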

  2. Environmental stress, inbreeding, and the nature of phenotypic and genetic variance in Drosophila melanogaster.

    PubMed Central

    Fowler, Kevin; Whitlock, Michael C

    2002-01-01

    Fifty-two lines of Drosophila melanogaster founded by single-pair population bottlenecks were used to study the effects of inbreeding and environmental stress on phenotypic variance, genetic variance and survivorship. Cold temperature and high density cause reduced survivorship, but these stresses do not cause repeatable changes in the phenotypic variance of most wing morphological traits. Wing area, however, does show increased phenotypic variance under both types of environmental stress. This increase is no greater in inbred than in outbred lines, showing that inbreeding does not increase the developmental effects of stress. Conversely, environmental stress does not increase the extent of inbreeding depression. Genetic variance is not correlated with environmental stress, although the amount of genetic variation varies significantly among environments and lines vary significantly in their response to environmental change. Drastic changes in the environment can cause changes in phenotypic and genetic variance, but not in a way reliably predicted by the notion of 'stress'. PMID:11934358

  3. Recent Progress in Engine Noise Reduction Technologies

    NASA Technical Reports Server (NTRS)

    Huff, Dennis; Gliebe, Philip

    2003-01-01

    Highlights from NASA-funded research over the past ten years for aircraft engine noise reduction are presented showing overall technical plans, accomplishments, and selected applications to turbofan engines. The work was sponsored by NASA's Advanced Subsonic Technology (AST) Noise Reduction Program. Emphasis is given to only the engine noise reduction research and significant accomplishments that were investigated at Technology Readiness Levels ranging from 4 to 6. The Engine Noise Reduction sub-element was divided into four work areas: source noise prediction, model scale tests, engine validation, and active noise control. Highlights from each area include technologies for higher bypass ratio turbofans, scarf inlets, forward-swept fans, swept and leaned stators, chevron/tabbed nozzles, advanced noise prediction analyses, and active noise control for fans. Finally, an industry perspective is given from General Electric Aircraft Engines showing how these technologies are being applied to commercial products. This publication contains only presentation vu-graphs from an invited lecture given at the 41st AIAA Aerospace Sciences Meeting, January 6-9, 2003.

  4. Studying Variance in the Galactic Ultra-compact Binary Population

    NASA Astrophysics Data System (ADS)

    Larson, Shane L.; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  5. Technologies for Turbofan Noise Reduction

    NASA Technical Reports Server (NTRS)

    Huff, Dennis

    2005-01-01

    An overview presentation of NASA's engine noise research since 1992 is given for subsonic commercial aircraft applications. Highlights are included from the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project with emphasis on engine source noise reduction. Noise reduction goals of 10 EPNdB by 2007 and 20 EPNdB by 2022 are reviewed. Fan and jet noise technologies are highlighted from the AST program including higher bypass ratio propulsion, scarf inlets, forward-swept fans, swept/leaned stators, chevron nozzles, noise prediction methods, and active noise control for fans. Source diagnostic tests for fans and jets that have been completed over the past few years are presented showing how new flow measurement methods such as Particle Image Velocimetry (PIV) have played a key role in understanding turbulence, the noise generation process, and how to improve noise prediction methods. Tests focused on source decomposition have helped identify which engine components need further noise reduction. The role of Computational AeroAcoustics (CAA) for fan noise prediction is presented. Advanced noise reduction methods such as Herschel-Quincke tubes and trailing edge blowing for fan noise that are currently being pursued in the QAT program are also presented. Highlights are shown from engine validation and flight demonstrations that were done in the late 1990s with Pratt & Whitney on their PW4098 engine and Honeywell on their TFE-731-60 engine. Finally, future propulsion configurations currently being studied that show promise towards meeting NASA's long-term goal of 20 dB noise reduction are shown including a Dual Fan Engine concept on a Blended Wing Body aircraft.

  6. POLLUTION PREVENTION RESEARCH ONGOING - EPA'S RISK REDUCTION ENGINEERING LABORATORY

    EPA Science Inventory

    The mission of the Risk Reduction Engineering Laboratory is to advance the understanding, development and application of engineering solutions for the prevention or reduction of risks from environmental contamination. This mission is accomplished through basic and applied researc...

  7. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
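
    The propagation-of-errors check described above can be reproduced for a simplified, hypothetical rating. The real NRR is computed from octave-band levels; the stand-in below (mean attenuation minus two standard deviations) only illustrates how an analytic error term compares with Monte Carlo resampling of simulated subject attenuations. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subjects, mu, sigma = 20, 30.0, 6.0   # hypothetical REAT mean and SD, in dB

        def rating(attenuations):
            # simplified stand-in for a label rating, not the actual NRR formula
            return attenuations.mean() - 2.0 * attenuations.std(ddof=1)

        # Monte Carlo: repeat the subject test many times and look at the rating spread
        ratings = np.array([rating(rng.normal(mu, sigma, n_subjects)) for _ in range(20000)])
        mc_se = ratings.std(ddof=1)

        # First-order propagation of errors (normal data):
        #   Var(mean) = sigma^2 / n,  Var(s) ~ sigma^2 / (2 (n - 1))
        analytic_se = np.sqrt(sigma**2 / n_subjects + 4.0 * sigma**2 / (2 * (n_subjects - 1)))

        print(f"Monte Carlo SE = {mc_se:.2f} dB, analytic SE = {analytic_se:.2f} dB")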

  8. Genetic and environmental variance in content dimensions of the MMPI.

    PubMed

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  9. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    PubMed

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.
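
    The residual approach itself is easy to reproduce. The sketch below regresses memory scores on demographic and brain-volume covariates with ordinary least squares and keeps the residual as the reserve proxy; the covariate names, the use of OLS, and the toy data are assumptions for illustration only, not the study's exact latent-variable models.

        import numpy as np

        def residual_reserve(memory, covariates):
            """Residual of memory after regressing out covariates (an intercept is added)."""
            X = np.column_stack([np.ones(len(memory)), covariates])
            beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
            return memory - X @ beta

        # toy data: 244 "participants", 5 covariates (e.g. age, education, brain volumes)
        rng = np.random.default_rng(2)
        covariates = rng.normal(size=(244, 5))
        memory = covariates @ rng.normal(size=5) + rng.normal(size=244)
        reserve = residual_reserve(memory, covariates)
        print(f"SD of the residual memory measure: {reserve.std(ddof=1):.3f}")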

  10. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm^-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm^-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  11. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE PAGES

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm^-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm^-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  12. Amplitude Reduction and Phase Shifts of Melatonin, Cortisol and Other Circadian Rhythms after a Gradual Advance of Sleep and Light Exposure in Humans

    PubMed Central

    Dijk, Derk-Jan; Duffy, Jeanne F.; Silva, Edward J.; Shanahan, Theresa L.; Boivin, Diane B.; Czeisler, Charles A.

    2012-01-01

    Background: The phase and amplitude of rhythms in physiology and behavior are generated by circadian oscillators and entrained to the 24-h day by exposure to the light-dark cycle and feedback from the sleep-wake cycle. The extent to which the phase and amplitude of multiple rhythms are similarly affected during altered timing of light exposure and the sleep-wake cycle has not been fully characterized. Methodology/Principal Findings: We assessed the phase and amplitude of the rhythms of melatonin, core body temperature, cortisol, alertness, performance and sleep after a perturbation of entrainment by a gradual advance of the sleep-wake schedule (10 h in 5 days) and associated light-dark cycle in 14 healthy men. The light-dark cycle consisted either of moderate intensity ‘room’ light (∼90–150 lux) or moderate light supplemented with bright light (∼10,000 lux) for 5 to 8 hours following sleep. After the advance of the sleep-wake schedule in moderate light, no significant advance of the melatonin rhythm was observed, whereas after bright light supplementation the phase advance was 8.1 h (SEM 0.7 h). Individual differences in phase shifts correlated across variables. The amplitude of the melatonin rhythm assessed under constant conditions was reduced after moderate light by 54% (17–94%) and after bright light by 52% (range 12–84%), as compared to the amplitude at baseline in the presence of a sleep-wake cycle. Individual differences in amplitude reduction of the melatonin rhythm correlated with the amplitude of body temperature, cortisol and alertness. Conclusions/Significance: Alterations in the timing of the sleep-wake cycle and associated bright or moderate light exposure can lead to changes in phase and reduction of circadian amplitude which are consistent across multiple variables but differ between individuals. These data have implications for our understanding of circadian organization and the negative health outcomes associated with shift-work and jet lag.

  13. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Public design report (preliminary and final)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-07-01

    This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NOx emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners, advanced overfire systems, and digital control system.

  14. 42 CFR 456.524 - Notification of Administrator's action and duration of variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration of...

  15. Variance approximations for assessments of classification accuracy

    Treesearch

    R. L. Czaplewski

    1994-01-01

    Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...

  16. Consequences of environmental and biological variances for range margins: a spatially explicit theoretical model

    NASA Astrophysics Data System (ADS)

    Malanson, G. P.; DeRose, R. J.; Bekker, M. F.

    2016-12-01

    The consequences of increasing climatic variance while including variability among individuals and populations are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Each species' response to the environment is a Gaussian function with a peak of 1.0 at its optimum on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two-species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
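
    The asymmetry argument (negative climatic anomalies are not compensated by positive ones) can be seen with a toy version of the Gaussian response used in the model. The parameter values below are illustrative only, not those of the Pinus edulis simulation.

        import numpy as np

        rng = np.random.default_rng(3)

        def fitness(env, optimum=0.0, width=1.0):
            # Gaussian response with a peak of 1.0 at the species optimum
            return np.exp(-0.5 * ((env - optimum) / width) ** 2)

        def mean_fitness(env_sd, mean_env=0.5, n_draws=100000):
            env = mean_env + rng.normal(0.0, env_sd, n_draws)   # climatic anomalies
            return fitness(env).mean()

        for multiplier in (1.0, 1.5, 2.0, 3.0):                 # increasing climatic variance
            print(multiplier, round(mean_fitness(env_sd=0.3 * multiplier), 3))

    Mean fitness drops as the anomaly variance grows because the response curve is bounded at 1.0: good years cannot offset the losses of bad years, which is the mechanism the abstract identifies at the trailing range margin.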

  17. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  18. 40 CFR 142.307 - What terms and conditions must be included in a small system variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...

  19. 40 CFR 142.307 - What terms and conditions must be included in a small system variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...

  20. 40 CFR 142.307 - What terms and conditions must be included in a small system variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...

  1. 40 CFR 142.307 - What terms and conditions must be included in a small system variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...

  2. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.

  3. Genetic Variance in the SES-IQ Correlation.

    ERIC Educational Resources Information Center

    Eckland, Bruce K.

    1979-01-01

    Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)

  4. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  5. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  6. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  7. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 1 2011-07-01 2009-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  8. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  9. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  10. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
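
    A minimal stand-in for fitting such a hybrid weight from an archive of (observation-minus-forecast, ensemble-variance) pairs is sketched below. The least-squares fit of squared innovations to a weighted average of the ensemble variance and a climatological variance is an illustrative assumption, not the closed-form formula derived in the paper, and observation error is ignored for simplicity.

        import numpy as np

        def hybrid_weight(innovations, ens_vars):
            """Fit w in  E[(o - f)^2 | s^2] ~ w * s^2 + (1 - w) * clim_var  by least squares."""
            clim_var = np.mean(innovations**2)      # climatological forecast error variance
            y = innovations**2 - clim_var
            x = ens_vars - clim_var
            return float(x @ y / (x @ x))

        # toy archive: the true error variance fluctuates about a climatological value,
        # and the ensemble variance is a noisy, unbiased estimate of it
        rng = np.random.default_rng(4)
        true_var = rng.gamma(shape=4.0, scale=0.25, size=5000)
        ens_var = true_var * rng.chisquare(10, size=5000) / 10.0
        innov = rng.normal(0.0, np.sqrt(true_var))
        print(f"estimated weight on the ensemble variance: {hybrid_weight(innov, ens_var):.2f}")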

  11. 29 CFR 1905.10 - Variances and other relief under section 6(b)(6)(A).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(b)(6)(A). 1905... section 6(b)(6)(A). (a) Application for variance. Any employer, or class of employers, desiring a variance from a standard, or portion thereof, authorized by section 6(b)(6)(A) of the Act may file a written...

  12. A new variance stabilizing transformation for gene expression data analysis.

    PubMed

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
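
    The generalized logarithm mentioned as one member of the family can be written down directly. In the sketch below, the tuning constant c and the toy two-component error model (additive background noise plus multiplicative noise, a common description of expression data) are illustrative assumptions.

        import numpy as np

        def glog(x, c):
            """Generalized logarithm: behaves like log(x) for large x, but stays finite and smooth near zero."""
            return np.log((x + np.sqrt(x**2 + c**2)) / 2.0)

        rng = np.random.default_rng(5)
        true_signal = np.repeat(np.geomspace(1.0, 1e4, 20), 200)
        measured = true_signal * np.exp(rng.normal(0.0, 0.2, true_signal.size)) \
                   + rng.normal(0.0, 50.0, true_signal.size)

        for name, z in [("raw", measured), ("glog", glog(measured, c=250.0))]:
            sds = z.reshape(20, 200).std(axis=1)   # within-level SDs across the 20 signal levels
            print(f"{name:4s}: group SDs range from {sds.min():.3f} to {sds.max():.3f}")

    After the transformation the within-level standard deviations are far more homogeneous than on the raw scale, which is what "variance stabilizing" means in this context.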

  13. Effect of ground skidding on oak advance regeneration

    Treesearch

    Jeffrey W. Stringer

    2006-01-01

    Vigorous advance regeneration is required to naturally regenerate oaks. However, a reduction in the number of advance regeneration stems from harvesting activities could be an important factor in determining successful oak regeneration. This study assessed the harvest survivability of advance regeneration of oak (Quercus spp.) and co-occurring...

  14. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    PubMed Central

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors to obtain a nonlinear state-space model was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
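
    For comparison with the online estimator, the traditional batch Allan variance that requires the full stored record can be computed in a few lines. The sampling rate, noise level and cluster times below are illustrative.

        import numpy as np

        def allan_variance(rate, fs, taus):
            """Non-overlapping Allan variance of a rate signal sampled at fs Hz."""
            out = []
            for tau in taus:
                m = int(round(tau * fs))                      # samples per cluster
                k = len(rate) // m                            # number of clusters
                means = rate[:k * m].reshape(k, m).mean(axis=1)
                out.append(0.5 * np.mean(np.diff(means) ** 2))
            return np.array(out)

        # white-noise-only gyro signal: the log-log AVAR slope should be close to -1
        rng = np.random.default_rng(6)
        fs = 100.0
        gyro = 0.01 * rng.standard_normal(int(3600 * fs))
        taus = (0.1, 1.0, 10.0, 100.0)
        for tau, av in zip(taus, allan_variance(gyro, fs, taus)):
            print(f"tau = {tau:6.1f} s   AVAR = {av:.3e}")

    Fitting slopes to such a log-log curve is the manual step that the neural-extended Kalman filter replaces with sequential coefficient estimates.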

  15. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78 % of one-finger video frames and 97.55 % of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x=0.08 cm, y=0.04 cm) and standard deviations (σ x =0.16 cm, σ y =0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.

  16. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    PubMed

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.

  17. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel, and for continuous data the restricted maximum likelihood estimator, are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a ‘generalised Cochran between-study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
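
    As a point of reference for the estimators discussed above, the DerSimonian and Laird moment estimator of the between-study variance can be written compactly; the toy effect sizes and within-study variances are hypothetical.

        import numpy as np

        def dersimonian_laird_tau2(effects, variances):
            """Method-of-moments estimate of the between-study variance tau^2."""
            y = np.asarray(effects, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)      # fixed-effect weights
            mu_fe = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - mu_fe) ** 2)                  # Cochran's Q
            denom = np.sum(w) - np.sum(w**2) / np.sum(w)
            return max(0.0, (q - (len(y) - 1)) / denom)

        effects = [0.30, 0.10, 0.45, 0.02, 0.60]
        variances = [0.04, 0.02, 0.09, 0.03, 0.12]
        print(f"tau^2 (DL) = {dersimonian_laird_tau2(effects, variances):.4f}")

    The Paule-Mandel and restricted maximum likelihood estimators recommended in the record are iterative, reweighting each study by 1/(v_i + tau^2), but they take the same inputs and return the same kind of estimate.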

  18. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    PubMed

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by running iteratively expectation maximization-REML algorithm by the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10(-3) and 4.17×10(-3) for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  19. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alignment.
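
    The "random walk whose most frequent form minimizes the sum of the squares of the changes in direction" is the sine-generated curve, summarized here in standard notation (s is distance along the channel, M the meander length measured along the path, and omega the maximum angle the channel makes with the mean down-valley direction):

        \theta(s) \;=\; \omega \,\sin\!\left(\frac{2\pi s}{M}\right),
        \qquad\text{which minimizes}\qquad
        \int_{0}^{M} \left(\frac{d\theta}{ds}\right)^{2}\, ds

    for a path of fixed length joining fixed end points, consistent with the meander-length to radius-of-curvature ratio of 4.7 quoted in the abstract.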

  20. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the

  1. Estimation of bias and variance of measurements made from tomography scans

    NASA Astrophysics Data System (ADS)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
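
    The simulation-extrapolation idea adapted in this record can be illustrated on a generic noisy "scan" and measurement. In the sketch below, the quadratic extrapolant, the threshold-fraction measurement and all parameter values are assumptions for illustration; the paper applies the technique to measurements on reconstructed x-ray tomography volumes.

        import numpy as np

        def simex_estimate(data, noise_sd, measure, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), n_rep=200):
            """Re-add noise at levels lambda, then extrapolate the measurement to lambda = -1."""
            rng = np.random.default_rng(7)
            lam = np.array(lambdas)
            means = []
            for l in lam:
                extra_sd = noise_sd * np.sqrt(l)   # total added variance is l * noise_sd**2
                vals = [measure(data + rng.normal(0.0, extra_sd, data.shape)) for _ in range(n_rep)]
                means.append(np.mean(vals))
            return np.polyval(np.polyfit(lam, means, 2), -1.0)

        # toy volume: 30% of voxels are truly "on"; noise biases a threshold-based fraction
        rng = np.random.default_rng(8)
        truth = (np.linspace(0.0, 1.0, 10000) > 0.7).astype(float)
        scan = truth + rng.normal(0.0, 0.4, truth.shape)
        measure = lambda img: float((img > 0.5).mean())
        print("naive:", measure(scan), " SIMEX-corrected:", simex_estimate(scan, 0.4, measure))

    The naive fraction is biased towards 0.5 by the noise; extrapolating the measurement's trend against the added-noise level back to the notional zero-noise point (lambda = -1) recovers a value closer to the true 0.3.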

  2. Application of advanced technologies to small, short-haul aircraft

    NASA Technical Reports Server (NTRS)

    Andrews, D. G.; Brubaker, P. W.; Bryant, S. L.; Clay, C. W.; Giridharadas, B.; Hamamoto, M.; Kelly, T. J.; Proctor, D. K.; Myron, C. E.; Sullivan, R. L.

    1978-01-01

    The results of a preliminary design study which investigates the use of selected advanced technologies to achieve low-cost design for small (50-passenger), short-haul (50 to 1000 mile) transports are reported. The largest single item in the cost of manufacturing an airplane of this type is labor. A careful examination of applying advanced technology to airframe structure was performed, since one of the most labor-intensive parts of the airplane is structures. Also, preliminary investigation of advanced aerodynamics, flight controls, ride control and gust load alleviation systems, aircraft systems and turbo-prop propulsion systems was performed. The most beneficial advanced technology examined was bonded aluminum primary structure. The use of this structure in large wing panels and body sections resulted in a greatly reduced number of parts and fasteners and, therefore, labor hours. The resultant cost of assembled airplane structure was reduced by 40% and the total airplane manufacturing cost by 16%, a major cost reduction. With further development, test verification, and optimization, appreciable weight saving is also achievable. Other advanced technology items which showed significant gains are as follows: (1) advanced turboprop, which reduced block fuel by 15-30% depending on range; (2) configuration revisions (vee-tail), which gave an empennage cost reduction of 25%; (3) leading-edge flap addition, which gave a weight reduction of 2500 pounds.

  3. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    PubMed

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions-variance homogeneity and normality-that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.

  4. Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2009-01-01

    Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…

  5. Optimal control of LQG problem with an explicit trade-off between mean and variance

    NASA Astrophysics Data System (ADS)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of performance index. The nonlinear utility function is first converted into an auxiliary parameters optimisation problem about the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the algorithm's effectiveness obtained in this article.
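
    Written out, the overall objective is a scalar utility of the two moments of the usual quadratic cost. The particular weighted form shown below is only one common choice and is an assumption for illustration, since the abstract does not fix U:

        J \;=\; x_N^{\top} Q_N\, x_N \;+\; \sum_{k=0}^{N-1}\bigl(x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k\bigr),
        \qquad
        \min_{u_0,\dots,u_{N-1}} \; U\bigl(\mathbb{E}[J],\,\operatorname{Var}[J]\bigr),
        \quad\text{e.g.}\quad U(m, v) \;=\; m + \theta\, v,\;\; \theta \ge 0.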

  6. Advances in the management of dyslipidemia.

    PubMed

    Kampangkaew, June; Pickett, Stephen; Nambi, Vijay

    2017-07-01

    Cardiovascular disease is the leading cause of morbidity and mortality in the United States and therapies aimed at lipid modification are important for the reduction of cardiovascular risk. There have been many exciting advances in lipid management over recent years. This review discusses these recent advances as well as the direction of future studies. Several recent clinical trials support low-density lipoprotein cholesterol (LDL-c) reduction beyond maximal statin therapy for improved cardiovascular outcomes. Ezetimibe reduced LDL-c beyond maximal statin therapy and was associated with improved cardiovascular outcomes for high-risk populations. Further LDL-c reduction may also be achieved with proprotein convertase subtilisin/kexin type-9 (PCSK9) inhibition and a recent trial, Further Cardiovascular Outcomes Research with PCSK9 Inhibition in Subjects with Elevated Risk (FOURIER), was the first to show reduction in cardiovascular events for evolocumab. Additional outcome studies of monoclonal antibody and RNA-targeted PCSK9 inhibitors are underway. Quantitative high-density lipoprotein cholesterol (HDL-c) improvements have failed to have clinical impact to date; most recently, cholesteryl ester transfer protein inhibitors and apolipoprotein infusions have demonstrated disappointing results. There are still ongoing trials in both of these areas, but some newer therapies are focusing on HDL functionality and not just the absolute HDL-c levels. There are several ongoing studies in triglyceride reduction including fatty acid therapy, inhibition of apolipoprotein C-3 or ANGPTL3, and peroxisome proliferator-activated receptor-α agonists. Lipid management continues to evolve and these advances have the potential to change clinical practice in the coming years.

  7. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    PubMed

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased

  8. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...

  9. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...

  10. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...

  11. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...

  12. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...

  13. Jackknife variance of the partial area under the empirical receiver operating characteristic curve.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-04-01

    Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (partial area or partial area under the receiver operating characteristic curve) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
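
    A brute-force (delete-one per group) jackknife is a useful check on any closed-form variance expression for the empirical partial AUC. The sketch below is such a check; the false-positive-rate limit, the toy scores and the two-group jackknife layout are illustrative assumptions.

        import numpy as np

        def empirical_pauc(neg, pos, fpr_max):
            """Empirical ROC area over FPR in [0, fpr_max] (fpr_max rounded down to a multiple of 1/len(neg))."""
            neg, pos = np.sort(np.asarray(neg)), np.asarray(pos)
            m = len(neg)
            area = 0.0
            for i in range(int(np.floor(fpr_max * m))):
                thr = neg[m - 1 - i]                          # i-th largest negative score
                area += (np.mean(pos > thr) + 0.5 * np.mean(pos == thr)) / m
            return area

        def jackknife_variance(neg, pos, fpr_max):
            loo_neg = np.array([empirical_pauc(np.delete(neg, i), pos, fpr_max) for i in range(len(neg))])
            loo_pos = np.array([empirical_pauc(neg, np.delete(pos, j), fpr_max) for j in range(len(pos))])
            m, n = len(neg), len(pos)
            return ((m - 1) / m) * np.sum((loo_neg - loo_neg.mean()) ** 2) \
                 + ((n - 1) / n) * np.sum((loo_pos - loo_pos.mean()) ** 2)

        rng = np.random.default_rng(9)
        neg, pos = rng.normal(0.0, 1.0, 60), rng.normal(1.0, 1.0, 40)
        print(empirical_pauc(neg, pos, 0.3), np.sqrt(jackknife_variance(neg, pos, 0.3)))

    A closed-form expression like the one derived in the record targets the same quantity without recomputing the statistic once per observation, which is where the reported reduction in computational burden comes from.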

  14. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
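
    A minimal numerical check of the point made above, assuming the standard general expression Var(s^2) = mu4/n - sigma^4 (n-3)/(n(n-1)); the exponential example and the Monte Carlo settings are illustrative, and the paper's h-statistic estimators are not reproduced here:

        import numpy as np

        rng = np.random.default_rng(1)
        n, reps = 20, 100_000

        def mc_var_of_sample_variance(sample_fn):
            """Monte Carlo estimate of Var(s^2) for samples of size n."""
            s2 = np.array([np.var(sample_fn(n), ddof=1) for _ in range(reps)])
            return s2.var()

        # Population moments of a leptokurtic exponential(1) distribution: sigma^2 = 1, mu4 = 9
        sigma2, mu4 = 1.0, 9.0
        general = mu4 / n - sigma2**2 * (n - 3) / (n * (n - 1))   # general formula
        normal  = 2 * sigma2**2 / (n - 1)                         # normal-theory shortcut

        print("Monte Carlo :", mc_var_of_sample_variance(lambda k: rng.exponential(1.0, k)))
        print("General     :", general)
        print("Normal-based:", normal)   # far too small for this skewed, non-normal case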

  15. 76 FR 30027 - Land Disposal Restrictions: Site-Specific Treatment Variance for Hazardous Selenium-Bearing Waste...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ... Restrictions: Site-Specific Treatment Variance for Hazardous Selenium-Bearing Waste Treated by U.S. Ecology.... Ecology Nevada in Beatty, Nevada and withdrew an existing site- specific treatment variance issued to... 268.44(o)) by granting a site-specific treatment variance to U.S. Ecology Nevada in Beatty, Nevada and...

  16. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  17. Contrast media enhancement reduction predicts tumor response to presurgical molecular-targeting therapy in patients with advanced renal cell carcinoma.

    PubMed

    Hosogoe, Shogo; Hatakeyama, Shingo; Kusaka, Ayumu; Hamano, Itsuto; Tanaka, Yoshimi; Hagiwara, Kazuhisa; Hirai, Hideaki; Morohashi, Satoko; Kijima, Hiroshi; Yamamoto, Hayato; Tobisawa, Yuki; Yoneyama, Tohru; Yoneyama, Takahiro; Hashimoto, Yasuhiro; Koie, Takuya; Ohyama, Chikara

    2017-07-25

    Quantitative evaluation of tumor response to molecular-targeting agents in advanced renal cell carcinoma (RCC) remains debatable. We aimed to evaluate the relationship between radiologic tumor response and pathological response in patients with advanced RCC who underwent presurgical therapy. Between March 2012 and December 2016, we prospectively enrolled 34 patients with locally advanced and/or metastatic RCC who underwent presurgical molecular-targeting therapy followed by radical nephrectomy. The primary endpoint was comparison of radiologic tumor response among the Response Evaluation Criteria in Solid Tumors (RECIST), Choi, and contrast media enhancement reduction (CMER) criteria. Secondary endpoints included pathological downstaging, treatment-related adverse events, postoperative complications, Ki67/MIB1 status, and tumor necrosis. Of the 34 patients, 31 underwent scheduled radical nephrectomy. Presurgical therapy agents included axitinib (n = 26), everolimus (n = 3), sunitinib (n = 1), and axitinib followed by temsirolimus (n = 1). The major presurgical treatment-related adverse event was grade 2 or 3 hypertension (44%). The median radiologic tumor responses by RECIST, Choi, and CMER were -19%, -24%, and -49%, respectively. Among the radiologic response criteria, CMER showed the strongest association with tumor necrosis in surgical specimens. Ki67/MIB1 status was significantly lower in surgical specimens than in biopsy specimens. The magnitude of the slope of the regression line associated with the tumor necrosis percentage was greater for CMER than for Choi and RECIST. CMER may predict tumor response after presurgical molecular-targeting therapy; larger prospective studies are needed to develop an optimal tumor response evaluation for molecular-targeting therapy.

  18. Variance analysis refines overhead cost control.

    PubMed

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.

  19. On the additive and dominant variance and covariance of individuals within the genomic selection scope.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Legarra, Andres

    2013-12-01

    Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
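
    A minimal sketch of the two relationship matrices discussed above, assuming VanRaden-style centring for the additive matrix G and the classical dominance-deviation coding (-2q^2, 2pq, -2p^2) for D; the scaling constants and the random toy genotype matrix are illustrative assumptions and should be checked against the paper:

        import numpy as np

        def genomic_G_D(M):
            """Additive (G) and dominance (D) genomic relationship matrices from a
            0/1/2 genotype matrix M (individuals x markers)."""
            p = M.mean(axis=0) / 2.0                       # allele frequencies
            q = 1.0 - p
            Z = M - 2.0 * p                                # additive (centred) coding
            G = Z @ Z.T / np.sum(2.0 * p * q)
            # dominance-deviation coding: genotype 2 -> -2q^2, 1 -> 2pq, 0 -> -2p^2
            W = np.where(M == 2, -2.0 * q**2, np.where(M == 1, 2.0 * p * q, -2.0 * p**2))
            D = W @ W.T / np.sum((2.0 * p * q) ** 2)
            return G, D

        M = np.random.default_rng(8).integers(0, 3, size=(20, 500))   # toy genotypes
        G, D = genomic_G_D(M)
        print(G.shape, D.shape, np.diag(G).mean(), np.diag(D).mean())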

  20. Testing Small Variance Priors Using Prior-Posterior Predictive p Values.

    PubMed

    Hoijtink, Herbert; van de Schoot, Rens

    2017-04-03

    Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, both are not suited for the evaluation of models based on small variance priors. In this article, a well behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    ERIC Educational Resources Information Center

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
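
    The same demonstration can be reproduced outside Excel; a minimal Python sketch (the sample size and number of replications are arbitrary choices):

        import numpy as np

        # Why the sample variance divides by n-1: with true variance 1, the divisor
        # n-1 is right on average (unbiased), while the divisor n is too small.
        rng = np.random.default_rng(0)
        n, reps = 5, 100_000
        samples = rng.normal(0.0, 1.0, size=(reps, n))
        ss = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
        print("divide by n-1:", (ss / (n - 1)).mean())   # close to 1
        print("divide by n  :", (ss / n).mean())         # close to (n-1)/n = 0.8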

  2. Response variance in functional maps: neural darwinism revisited.

    PubMed

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  3. Response Variance in Functional Maps: Neural Darwinism Revisited

    PubMed Central

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population. PMID:23874733

  4. Noise impact of advanced high lift systems

    NASA Technical Reports Server (NTRS)

    Elmer, Kevin R.; Joshi, Mahendra C.

    1995-01-01

    The impact of advanced high lift systems on aircraft size, performance, direct operating cost and noise were evaluated for short-to-medium and medium-to-long range aircraft with high bypass ratio and very high bypass ratio engines. The benefit of advanced high lift systems in reducing noise was found to be less than 1 effective-perceived-noise decibel level (EPNdB) when the aircraft were sized to minimize takeoff gross weight. These aircraft did, however, have smaller wings and lower engine thrusts for the same mission than aircraft with conventional high lift systems. When the advanced high lift system was implemented without reducing wing size and simultaneously using lower flap angles that provide higher L/D at approach a cumulative noise reduction of as much as 4 EPNdB was obtained. Comparison of aircraft configurations that have similar approach speeds showed cumulative noise reduction of 2.6 EPNdB that is purely the result of incorporating advanced high lift system in the aircraft design.

  5. Technologies for Aircraft Noise Reduction

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.

    2006-01-01

    Technologies for aircraft noise reduction have been developed by NASA over the past 15 years through the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project. This presentation summarizes highlights from these programs and anticipated noise reduction benefits for communities surrounding airports. Historical progress in noise reduction and technologies available for future aircraft/engine development are identified. Technologies address aircraft/engine components including fans, exhaust nozzles, landing gear, and flap systems. New "chevron" nozzles have been developed and implemented on several aircraft in production today that provide significant jet noise reduction. New engines using Ultra-High Bypass (UHB) ratios are projected to provide about 10 EPNdB (Effective Perceived Noise Level in decibels) engine noise reduction relative to the average fleet that was flying in 1997. Audio files are embedded in the presentation that estimate the sound levels for a 35,000 pound thrust engine for takeoff and approach power conditions. The predictions are based on actual model scale data that was obtained by NASA. Finally, conceptual pictures are shown that look toward future aircraft/propulsion systems that might be used to obtain further noise reduction.

  6. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of GPS/MET data analysis.

  7. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    NASA Astrophysics Data System (ADS)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  8. The distribution of genetic variance across phenotypic space and the response to selection.

    PubMed

    Blows, Mark W; McGuigan, Katrina

    2015-05-01

    The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.

  9. Thermal noise variance of a receive radiofrequency coil as a respiratory motion sensor.

    PubMed

    Andreychenko, A; Raaijmakers, A J E; Sbrizzi, A; Crijns, S P M; Lagendijk, J J W; Luijten, P R; van den Berg, C A T

    2017-01-01

    Development of a passive respiratory motion sensor based on the noise variance of the receive coil array. Respiratory motion alters the body resistance. The noise variance of an RF coil depends on the body resistance and, thus, is also modulated by respiration. For the noise variance monitoring, the noise samples were acquired without and with MR signal excitation on clinical 1.5/3 T MR scanners. The performance of the noise sensor was compared with the respiratory bellow and with the diaphragm displacement visible on MR images. Several breathing patterns were tested. The noise variance demonstrated a periodic, temporal modulation that was synchronized with the respiratory bellow signal. The modulation depth of the noise variance resulting from the respiration varied between the channels of the array and depended on the channel's location with respect to the body. The noise sensor combined with MR acquisition was able to detect the respiratory motion for every k-space read-out line. Within clinical MR systems, the respiratory motion can be detected by the noise in receive array. The noise sensor does not require careful positioning unlike the bellow, any additional hardware, and/or MR acquisition. Magn Reson Med 77:221-228, 2017. © 2016 Wiley Periodicals, Inc.

  10. Serum adiponectin is increased in advancing liver fibrosis and declines with reduction in fibrosis in chronic hepatitis B.

    PubMed

    Hui, Chee-Kin; Zhang, Hai-Ying; Lee, Nikki P; Chan, Weng; Yueng, Yui-Hung; Leung, Kar-Wai; Lu, Lei; Leung, Nancy; Lo, Chung-Mau; Fan, Sheung-Tat; Luk, John M; Xu, Aimin; Lam, Karen S; Kwong, Yok-Lam; Lau, George K K

    2007-08-01

    Despite the possible role of adiponectin in the pathogenesis of liver cirrhosis, few data have been collected from patients in different stages of liver fibrosis. We studied the role of adiponectin in 2 chronic hepatitis B (CHB)-patient cohorts. Serum adiponectin was quantified by enzyme-linked immunosorbent assay. One-hundred liver biopsy specimens from CHB patients with different stages of fibrosis and 38 paired liver biopsies from hepatitis B e antigen-positive patients randomized to lamivudine (n=15), pegylated interferon alfa-2a (n=15) or pegylated interferon alfa-2a plus lamivudine (n=8) therapy for 48 weeks were assessed. Serum adiponectin was detected at levels ranging over a fourfold magnitude with advancing fibrosis stage and correlated positively with fibrosis stage [r=0.45, p<0.001]. CHB patients with stage 0-1 fibrosis had a higher composition of the high molecular weight (HMW) form of adiponectin when compared with CHB patients with liver cirrhosis [mean ± SEM 51.2 ± 2.1% vs. 40.9 ± 1.7%, respectively, p=0.001]. After antiviral therapy, patients with fibrosis reduction had a marked decline in serum adiponectin level and an increase in the HMW form of adiponectin [mean ± SEM 43.5 ± 1.2% vs. 37.0 ± 3.0%, respectively, p=0.04]. Serum adiponectin may have a role in fibrosis progression in CHB infection. A marked decline in serum adiponectin after antiviral therapy is associated with fibrosis reduction.

  11. Long-Haul Truck Sleeper Heating Load Reduction Package for Rest Period Idling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lustbader, Jason Aaron; Kekelia, Bidzina; Tomerlin, Jeff

    Annual fuel use for sleeper cab truck rest period idling is estimated at 667 million gallons in the United States, or 6.8% of long-haul truck fuel use. Truck idling during a rest period represents zero freight efficiency and is largely done to supply accessory power for climate conditioning of the cab. The National Renewable Energy Laboratory's CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck thermal management systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In addition, if the fuel savings provide a one- to three-year payback period, fleet owners will be economically motivated to incorporate them. For candidate idle reduction technologies to be implemented by original equipment manufacturers and fleets, their effectiveness must be quantified. To address this need, several promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. Load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, conductive pathways, and efficient equipment. Technologies in each of these focus areas were investigated in collaboration with industry partners. The most promising of these technologies were then combined with the goal of exceeding a 30% reduction in HVAC loads. These technologies included 'ultra-white' paint, advanced insulation, and advanced curtain design. Previous testing showed more than a 35.7% reduction in air conditioning loads. This paper describes the overall heat transfer coefficient testing of this advanced load reduction technology package that showed more than a 43% reduction in heating load. Adding an additional layer of advanced insulation with a

  12. High-Speed Jet Noise Reduction NASA Perspective

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.; Handy, J. (Technical Monitor)

    2001-01-01

    History shows that the problem of high-speed jet noise reduction is difficult to solve. The good news is that high performance military aircraft noise is dominated by a single source called 'jet noise' (commercial aircraft have several sources). The bad news is that this source has been the subject of research for the past 50 years and progress has been incremental. Major jet noise reduction has been achieved through changing the cycle of the engine to reduce the jet exit velocity. Smaller reductions have been achieved using suppression devices like mixing enhancement and acoustic liners. Significant jet noise reduction without any performance loss is probably not possible! Recent NASA Noise Reduction Research Programs include the High Speed Research Program, Advanced Subsonic Technology Noise Reduction Program, Aerospace Propulsion and Power Program - Fundamental Noise, and Quiet Aircraft Technology Program.

  13. Dominance, Information, and Hierarchical Scaling of Variance Space.

    ERIC Educational Resources Information Center

    Ceurvorst, Robert W.; Krus, David J.

    1979-01-01

    A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)

  14. An overview of advanced reduction processes for bromate removal from drinking water: Reducing agents, activation methods, applications and mechanisms.

    PubMed

    Xiao, Qian; Yu, Shuili; Li, Lei; Wang, Ting; Liao, Xinlei; Ye, Yubing

    2017-02-15

    Bromate (BrO3-) is a possible human carcinogen regulated at a strict standard of 10 μg/L in drinking water. Various techniques to eliminate BrO3- usually fall into three main categories: reducing bromide (Br-) prior to formation of BrO3-, minimizing BrO3- formation during the ozonation process, and removing BrO3- from post-ozonation waters. However, the first two approaches exhibit low degradation efficiency and high treatment cost. The third workaround has obvious advantages, such as high reduction efficiency, more stable performance and easier combination with UV disinfection, and has therefore been widely implemented in water treatment. Recently, advanced reduction processes (ARPs), the photocatalysis of BrO3-, have attracted much attention due to improved performance. To increase the feasibility of photocatalytic systems, the focus of this work concerns new technological developments, followed by a summary of reducing agents, activation methods, operational parameters, and applications. The reaction mechanisms of two typical processes involving UV/sulfite homogeneous photocatalysis and UV/titanium dioxide heterogeneous photocatalysis are further summarized. The future research needs for ARPs to reach full-scale potential in drinking water treatment are suggested accordingly. Copyright © 2016. Published by Elsevier B.V.

  15. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If sphericity is violated (P < 0.05), the degrees of freedom should be corrected (or the multivariate results used) before interpreting the within-subject analysis of variance. The repeated measurement effect or its interactive effect with treated group could be evaluated by estimating within-subject variance. The method of Bonferroni should be used to do pairwise comparisons of the repeatedly measured data at different measurement times within each treated group. With multivariate ANOVA, data from different treated groups at each measurement time could be compared pairwise. The repeated measures process of the general linear model is suitable for variance analysis of repeatedly measured data. The SPSS statistical package is available to fulfil this process.
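
    For readers without SPSS, a roughly analogous within-subject analysis can be sketched with statsmodels' AnovaRM; the toy water-maze data, the group offsets, and the choice to run the repeated-measures ANOVA per group are illustrative assumptions (AnovaRM does not handle mixed between-within designs, and Mauchly's test is not computed here):

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Toy water-maze-style data: escape latency for 3 groups, each rat tested on 4 days
        rng = np.random.default_rng(9)
        offset = {"control": -10.0, "model": 10.0, "treated": 0.0}
        rows = []
        for subj in range(24):
            group = ["control", "model", "treated"][subj % 3]
            for day in range(1, 5):
                rows.append({"subject": subj, "group": group, "day": f"d{day}",
                             "latency": 60 - 8 * day + offset[group] + rng.normal(0, 5)})
        df = pd.DataFrame(rows)

        # Repeated-measures ANOVA on the within-subject factor "day", run per group
        for g, sub in df.groupby("group"):
            print(g)
            print(AnovaRM(sub, depvar="latency", subject="subject", within=["day"]).fit())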

  16. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
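
    A small simulation of the comparison described above; the uniform and Laplace populations, the number of periods T, and the pool size n are illustrative choices (composite values are rescaled by n so both procedures estimate the same population variance):

        import numpy as np

        rng = np.random.default_rng(2)
        T, n, reps = 30, 4, 20_000      # T periods, n sub-samples pooled per composite

        def compare(draw, true_var):
            """MSE of the population-variance estimate from grab vs composite sampling."""
            grab, comp = [], []
            for _ in range(reps):
                grab_obs = draw((T,))                         # one analysed sample per period
                comp_obs = draw((T, n)).mean(axis=1)          # n samples pooled, analysed once
                grab.append(np.var(grab_obs, ddof=1))
                comp.append(n * np.var(comp_obs, ddof=1))     # rescale: Var(mean of n) = var/n
            return (np.mean((np.array(grab) - true_var) ** 2),
                    np.mean((np.array(comp) - true_var) ** 2))

        # Platykurtic population (uniform): grab sampling estimates the variance better.
        print("uniform :", compare(lambda s: rng.uniform(-1, 1, s), 1 / 3))
        # Leptokurtic population (Laplace): composite sampling wins.
        print("laplace :", compare(lambda s: rng.laplace(0, 1, s), 2.0))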

  17. Deconstructing risk: Separable encoding of variance and skewness in the brain

    PubMed Central

    Symmonds, Mkael; Wright, Nicholas D.; Bach, Dominik R.; Dolan, Raymond J.

    2011-01-01

    Risky choice entails a need to appraise all possible outcomes and integrate this information with individual risk preference. Risk is frequently quantified solely by statistical variance of outcomes, but here we provide evidence that individuals’ choice behaviour is sensitive to both dispersion (variance) and asymmetry (skewness) of outcomes. Using a novel behavioural paradigm in humans, we independently manipulated these ‘summary statistics’ while scanning subjects with fMRI. We show that a behavioural sensitivity to variance and skewness is mirrored in neuroanatomically dissociable representations of these quantities, with parietal cortex showing sensitivity to the former and prefrontal cortex and ventral striatum to the latter. Furthermore, integration of these objective risk metrics with subjective risk preference is expressed in a subject-specific coupling between neural activity and choice behaviour in anterior insula. Our findings show that risk is neither monolithic from a behavioural nor neural perspective and its decomposition is evident both in distinct behavioural preferences and in segregated underlying brain representations. PMID:21763444

  18. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
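
    A minimal sketch of direct Monte Carlo versus importance sampling for a Gaussian tail probability, with the empirical estimator variances used to form an improvement ratio; the shifted-mean proposal and the tail threshold are illustrative assumptions, and the bounds derived in the paper are not computed here:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        N, a = 100_000, 4.0                     # estimate p = P(X > a), X ~ N(0, 1)

        # Direct Monte Carlo
        x = rng.normal(0, 1, N)
        h = (x > a).astype(float)
        p_mc, var_mc = h.mean(), h.var(ddof=1) / N

        # Importance sampling with the shifted (biased) density N(a, 1); weight = f(y)/g(y)
        y = rng.normal(a, 1, N)
        w = (y > a) * norm.pdf(y) / norm.pdf(y, loc=a)
        p_is, var_is = w.mean(), w.var(ddof=1) / N

        print("true       :", norm.sf(a))
        print("direct MC  :", p_mc, "+/-", var_mc ** 0.5)
        print("importance :", p_is, "+/-", var_is ** 0.5)
        print("improvement ratio (var_mc / var_is):", var_mc / var_is)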

  19. Empirical single sample quantification of bias and variance in Q-ball imaging.

    PubMed

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; simulation extrapolation (SIMEX) and bootstrap techniques can be used to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
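
    A toy sketch of the two ideas on a scalar metric (not an actual Q-ball reconstruction): extra noise is added at increasing levels and a quadratic fit is extrapolated back to the zero-noise case (SIMEX), while resampling the data gives a bootstrap standard deviation. The noise level, the absolute-value metric used as a stand-in for generalized fractional anisotropy, and the quadratic extrapolant are all illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(4)
        sigma = 0.2                                   # assumed known measurement-noise SD
        signal = rng.uniform(0.2, 0.4, 500)           # toy "true" voxel values
        obs = signal + rng.normal(0, sigma, signal.size)

        def metric(x):
            return np.mean(np.abs(x))                 # toy metric that noise biases upward

        # SIMEX: add extra noise at increasing levels, extrapolate back to zero noise
        lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # extra noise variance = lam * sigma^2
        est = [np.mean([metric(obs + rng.normal(0, np.sqrt(l) * sigma, obs.size))
                        for _ in range(200)]) for l in lams]
        coef = np.polyfit(1 + lams, est, 2)           # quadratic in total noise level (1 + lam)
        simex = np.polyval(coef, 0.0)                 # extrapolate to zero noise (lam = -1)
        print("naive:", metric(obs), " SIMEX:", simex, " bias estimate:", metric(obs) - simex)

        # Bootstrap: resample voxels to get the sampling SD of the naive metric
        boot = [metric(rng.choice(obs, obs.size, replace=True)) for _ in range(1000)]
        print("bootstrap SD:", np.std(boot, ddof=1))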

  20. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  1. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid

    PubMed Central

    Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.

    2016-01-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of the static tissue noise suppression in OCTA images we proposed to calculate CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598

  2. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  3. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  4. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
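
    A minimal sketch of the windowing step described above: a locally Gaussian series with a slowly varying variance is cut into windows, the empirical distribution of local variances is extracted, and the aggregate series shows the heavy tails that the compounding approach is meant to capture. The window length and the log-normal spread of local variances are illustrative choices:

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy nonstationary series: locally Gaussian, with a slowly varying variance
        n_win, win = 400, 250
        local_sd = np.exp(rng.normal(0, 0.4, n_win))            # one SD per window
        series = np.concatenate([rng.normal(0, s, win) for s in local_sd])

        # Decompose into windows and extract the empirical distribution of local variances
        local_var = series.reshape(n_win, win).var(axis=1, ddof=1)
        print("mean / spread of local variances:", local_var.mean(), local_var.std())

        # Compounding in action: the aggregate series is heavy-tailed even though each
        # window is close to Gaussian (excess kurtosis ~ 0 locally, > 0 globally)
        def excess_kurtosis(x):
            z = (x - x.mean()) / x.std()
            return np.mean(z ** 4) - 3.0
        print("excess kurtosis, single window :", excess_kurtosis(series[:win]))
        print("excess kurtosis, full series   :", excess_kurtosis(series))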

  5. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model structure is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to have transitions following a Markov chain process which is continuous and observable. This hybrid model can be used to illustrate certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.

  6. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance

  7. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed Central

    Patlak, J B

    1993-01-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
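
    A rough sketch of constructing a mean-variance histogram from a toy single-channel trace; the conductance levels, the noise level, the window width N and the low-variance threshold are illustrative assumptions, and the dwell-time extraction by curve fitting over window widths is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(6)

        # Toy single-channel trace: dwells at 0 (closed), 1 (sub-level) and 2 (open), plus noise
        levels = rng.choice([0.0, 1.0, 2.0], size=200, p=[0.5, 0.2, 0.3])
        dwells = rng.geometric(0.02, size=200)                 # dwell lengths in samples
        trace = np.repeat(levels, dwells) + rng.normal(0, 0.15, dwells.sum())

        # Mean-variance pairs from a sliding window of N consecutive samples
        N = 10
        windows = np.lib.stride_tricks.sliding_window_view(trace, N)
        means = windows.mean(axis=1)
        variances = windows.var(axis=1, ddof=1)

        # Two-dimensional histogram: defined current levels appear as low-variance regions
        hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=(60, 60))
        low_var = variances < 0.05
        print("fraction of windows in low-variance regions:", round(low_var.mean(), 3))
        print("candidate level means:", np.unique(np.round(means[low_var], 1)))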

  8. Effect of advanced aircraft noise reduction technology on the 1990 projected noise environment around Patrick Henry Airport. [development of noise exposure forecast contours for projected traffic volume and aircraft types

    NASA Technical Reports Server (NTRS)

    Cawthorn, J. M.; Brown, C. G.

    1974-01-01

    A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies which are currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits which would add sound absorbent material in the nacelles of the engines or which would replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented of the effects on noise exposure area of the total number of daily operations.

  9. Can currently available advanced combustion biomass cook-stoves provide health relevant exposure reductions? Results from initial assessment of select commercial models in India.

    PubMed

    Sambandam, Sankar; Balakrishnan, Kalpana; Ghosh, Santu; Sadasivam, Arulselvan; Madhav, Satish; Ramasamy, Rengaraj; Samanta, Maitreya; Mukhopadhyay, Krishnendu; Rehman, Hafeez; Ramanathan, Veerabhadran

    2015-03-01

    Household air pollution from use of solid fuels is a major contributor to the national burden of disease in India. Currently available models of advanced combustion biomass cook-stoves (ACS) report significantly higher efficiencies and lower emissions in the laboratory when compared to traditional cook-stoves, but relatively little is known about household level exposure reductions, achieved under routine conditions of use. We report results from initial field assessments of six commercial ACS models from the states of Tamil Nadu and Uttar Pradesh in India. We monitored 72 households (divided into six arms to each receive an ACS model) for 24-h kitchen area concentrations of PM2.5 and CO before and (1-6 months) after installation of the new stove together with detailed information on fixed and time-varying household characteristics. Detailed surveys collected information on user perceptions regarding acceptability for routine use. While the median percent reductions in 24-h PM2.5 and CO concentrations ranged from 2 to 71% and 10-66%, respectively, concentrations consistently exceeded WHO air quality guideline values across all models raising questions regarding the health relevance of such reductions. Most models were perceived to be sub-optimally designed for routine use often resulting in inappropriate and inadequate levels of use. Household concentration reductions also run the risk of being compromised by high ambient backgrounds from community level solid-fuel use and contributions from surrounding fossil fuel sources. Results indicate that achieving health relevant exposure reductions in solid-fuel using households will require integration of emissions reductions with ease of use and adoption at community scale, in cook-stove technologies. Imminent efforts are also needed to accelerate the progress towards cleaner fuels.

  10. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  11. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  12. Trait and State Variance in Oppositional Defiant Disorder Symptoms: A Multi-Source Investigation with Spanish Children

    PubMed Central

    Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu

    2016-01-01

    The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end of first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers’ and fathers’ trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance occurred only within settings (i.e., mothers with fathers; primary with secondary teachers), while the convergent validity of the trait and state residual variance components was low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources; this trait variance, however, has convergent validity only within settings. Implications for assessment of ODD are discussed. PMID:27148784

  13. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  14. Preliminary Study of Advanced Turboprops for Low Energy Consumption

    NASA Technical Reports Server (NTRS)

    Kraft, G. A.; Strack, W. C.

    1975-01-01

    The fuel savings potential of advanced turboprops (operational about 1985) was calculated and compared with that of an advanced turbofan for use in an advanced subsonic transport. At the design point, altitude 10.67 km and Mach 0.80, turbine-inlet temperature was fixed at 1590 K while overall pressure ratio was varied from 25 to 50. The regenerative turboprop had a pressure ratio of only 10 and an 85 percent effective rotary heat exchanger. Variable camber propellers were used with an efficiency of 85 percent. The study indicated that a fuel savings of 33 percent, a takeoff gross weight reduction of 15 percent, and a direct operating cost reduction of 18 percent were possible when turboprops were used instead of the reference turbofan at a range of 10 200 km. These reductions were 28, 11, and 14 percent, respectively, at a range of 5500 km. Increasing overall pressure ratio from 25 to 50 saved little fuel and slightly increased takeoff gross weight.

  15. Advanced Noise Control Fan (ANCF)

    NASA Image and Video Library

    2014-01-15

    The Advanced Noise Control Fan shown here is located in NASA Glenn’s Aero-Acoustic Propulsion Laboratory. The 4-foot diameter fan is used to evaluate innovative aircraft engine noise reduction concepts less expensively and more quickly.

  16. External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.; Geng, Steven M.

    2013-01-01

    Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.

  17. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
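    To make the bootstrap variance estimator concrete, the following sketch (not the authors' code) resamples subjects with replacement and repeats the entire procedure, re-fitting the propensity model and the weighted Cox model inside every resample; the pandas DataFrame, column names, and covariates are hypothetical, and scikit-learn and lifelines are assumed to be available.

      # Sketch: bootstrap standard error for an IPTW-weighted Cox log-hazard ratio (ATE weights).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from lifelines import CoxPHFitter

      X_COLS = ["age", "sex", "comorbidity"]            # hypothetical covariate names

      def iptw_log_hr(data):
          """Fit the propensity model, build ATE weights, return the weighted log-HR."""
          ps = LogisticRegression(max_iter=1000).fit(data[X_COLS], data["treated"])
          p = ps.predict_proba(data[X_COLS])[:, 1]
          w = np.where(data["treated"] == 1, 1.0 / p, 1.0 / (1.0 - p))   # inverse probability weights
          d = data[["time", "event", "treated"]].assign(w=w)
          cph = CoxPHFitter()
          cph.fit(d, duration_col="time", event_col="event", weights_col="w", robust=True)
          return cph.params_["treated"]

      def bootstrap_se(df, n_boot=500, seed=0):
          """Resample subjects with replacement and repeat the whole estimation procedure."""
          rng = np.random.default_rng(seed)
          est = [iptw_log_hr(df.sample(len(df), replace=True,
                                       random_state=int(rng.integers(1_000_000_000))))
                 for _ in range(n_boot)]
          return float(np.std(est, ddof=1))

    The essential design choice is that the weights are re-estimated within each bootstrap replicate, so that the uncertainty of the estimated propensity scores is propagated into the reported standard error.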

  18. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…

  19. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant density and variable density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.

  20. Long-Haul Truck Sleeper Heating Load Reduction Package for Rest Period Idling: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lustbader, Jason; Kekelia, Bidzina; Tomerlin, Jeff

    Annual fuel use for sleeper cab truck rest period idling is estimated at 667 million gallons in the United States, or 6.8% of long-haul truck fuel use. Truck idling during a rest period represents zero freight efficiency and is largely done to supply accessory power for climate conditioning of the cab. The National Renewable Energy Laboratory's CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck thermal management systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In addition, if the fuel savings provide a one- to three-year payback period, fleet owners will be economically motivated to incorporate them. For candidate idle reduction technologies to be implemented by original equipment manufacturers and fleets, their effectiveness must be quantified. To address this need, several promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. Load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, conductive pathways, and efficient equipment. Technologies in each of these focus areas were investigated in collaboration with industry partners. The most promising of these technologies were then combined with the goal of exceeding a 30% reduction in HVAC loads. These technologies included 'ultra-white' paint, advanced insulation, and advanced curtain design. Previous testing showed more than a 35.7% reduction in air conditioning loads. This paper describes the overall heat transfer coefficient testing of this advanced load reduction technology package that showed more than a 43% reduction in heating load. Adding an additional layer of advanced insulation with a

  1. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 − r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point in inverse proportion to the divergent estimation error.
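    The divergence can be illustrated with a small numerical sketch (this is an illustration, not the replica calculation): with i.i.d. returns whose true covariance is the identity, the in-sample variance of the sample-estimated minimum-variance portfolio shrinks roughly like (1 − r), while its true out-of-sample variance grows roughly like 1/(1 − r) as r = N/T approaches 1.

      # Sketch: in-sample vs out-of-sample variance of the global minimum-variance portfolio.
      import numpy as np

      rng = np.random.default_rng(0)

      def min_var_portfolio(sample_cov):
          ones = np.ones(sample_cov.shape[0])
          w = np.linalg.solve(sample_cov, ones)
          return w / (ones @ w)

      N = 50
      for T in (500, 200, 100, 60, 55):
          X = rng.standard_normal((T, N))           # i.i.d. returns, true covariance = identity
          S = np.cov(X, rowvar=False)               # sample covariance estimate
          w = min_var_portfolio(S)
          r = N / T
          in_sample = w @ S @ w                     # vanishes roughly like (1 - r)
          out_sample = w @ w                        # blows up roughly like 1/(1 - r)
          print(f"r={r:.2f}  in-sample={in_sample:.4f}  out-of-sample={out_sample:.4f}")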

  2. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... no access to an alternative raw water source, and can effect or anticipate no adequate improvement of....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the...

  3. 40 CFR 142.42 - Consideration of a variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... water source, the Administrator shall consider such factors as the following: (1) The availability and.... 142.42 Section 142.42 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the...

  4. Measuring self-rated productivity: factor structure and variance component analysis of the Health and Work Questionnaire.

    PubMed

    von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne

    2014-12-01

    To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.

  5. 40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...

  6. 40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...

  7. 40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...

  8. 40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...

  9. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it with a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. Using a weighting method, we obtain a linear program with fuzzy coefficients, which we solve with a simplex method for fuzzy linear programming.

  10. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
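    The sketch below is only illustrative of the second idea (it is not the paper's algorithm): the zeros outside a circular aperture are replaced by values drawn from inside the aperture before the 2-D FFT, removing the sharp aperture edge that drives the spectral leakage; all parameters are assumptions.

      # Sketch: fill the exterior of a circular surface map before computing its PSD.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 256
      y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
      aperture = x**2 + y**2 <= 1.0

      surface = rng.standard_normal((n, n)) * 1e-9          # synthetic surface errors [m]
      zero_padded = np.where(aperture, surface, 0.0)        # conventional zero-filled map

      filled = zero_padded.copy()
      inside_vals = surface[aperture]
      filled[~aperture] = rng.choice(inside_vals, size=int((~aperture).sum()))  # fill exterior

      def psd2d(z, dx):
          F = np.fft.fftshift(np.fft.fft2(z))
          return (np.abs(F) ** 2) * dx * dx / z.size

      dx = 2.0 / n
      psd_zero = psd2d(zero_padded, dx)   # edge of the aperture spreads power across frequencies
      psd_fill = psd2d(filled, dx)        # filled map suppresses that edge-driven variance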

  11. On zero variance Monte Carlo path-stretching schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, I.

    1983-08-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.
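    A minimal numerical sketch of the exponential transform for a deep-penetration problem (parameters are assumed, not taken from the report): free paths are sampled from a stretched exponential with a reduced cross section, and each transmitted history is scored with the analog-to-biased density ratio, leaving the mean unchanged while shrinking the variance.

      # Sketch: path stretching for the transmission probability P = exp(-sigma*d).
      import numpy as np

      rng = np.random.default_rng(2)
      sigma, d, n = 1.0, 10.0, 200_000            # analog answer: exp(-10) ~ 4.5e-5

      def transmission(sigma_b):
          """Score w*1{x > d} with x ~ sigma_b*exp(-sigma_b*x); w = analog/biased density."""
          x = rng.exponential(1.0 / sigma_b, n)
          w = (sigma * np.exp(-sigma * x)) / (sigma_b * np.exp(-sigma_b * x))
          score = w * (x > d)
          return score.mean(), score.std(ddof=1) / np.sqrt(n)

      print("analog        :", transmission(sigma))        # very few scores, large relative error
      print("path-stretched:", transmission(0.1 * sigma))  # same mean, far smaller variance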

  12. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... may submit a variance petition to the Assistant Secretary of Labor for Occupational Safety and Health...

  13. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... may submit a variance petition to the Assistant Secretary of Labor for Occupational Safety and Health...

  14. Variability, trends, and predictability of seasonal sea ice retreat and advance in the Chukchi Sea

    NASA Astrophysics Data System (ADS)

    Serreze, Mark C.; Crawford, Alex D.; Stroeve, Julienne C.; Barrett, Andrew P.; Woodgate, Rebecca A.

    2016-10-01

    As assessed over the period 1979-2014, the date that sea ice retreats to the shelf break (150 m contour) of the Chukchi Sea has a linear trend of -0.7 days per year. The date of seasonal ice advance back to the shelf break has a steeper trend of about +1.5 days per year, together yielding an increase in the open water period of 80 days. Based on detrended time series, we ask how interannual variability in advance and retreat dates relates to various forcing parameters including radiation fluxes, temperature and wind (from numerical reanalyses), and the oceanic heat inflow through the Bering Strait (from in situ moorings). Of all variables considered, the retreat date is most strongly correlated (r ≈ 0.8) with the April through June Bering Strait heat inflow. After testing a suite of statistical linear models using several potential predictors, the best model for predicting the date of retreat includes only the April through June Bering Strait heat inflow, which explains 68% of retreat date variance. The best model predicting the ice advance date includes the July through September inflow and the date of retreat, explaining 67% of advance date variance. We address these relationships by discussing heat balances within the Chukchi Sea, and the hypothesis of oceanic heat transport triggering ocean heat uptake and ice-albedo feedback. Developing an operational prediction scheme for seasonal retreat and advance would require timely acquisition of Bering Strait heat inflow data. Predictability will likely always be limited by the chaotic nature of atmospheric circulation patterns.

  15. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  16. Fidelity between Gaussian mixed states with quantum state quadrature variances

    NASA Astrophysics Data System (ADS)

    Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao

    2016-04-01

    In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).

  17. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
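    A minimal sketch of the transformation pair (the parameter values a and b are hypothetical; in practice they are fitted to the prediction errors):

      # Sketch: the log-sinh transformation and its inverse.
      import numpy as np

      def log_sinh(y, a, b):
          """z = (1/b) * log(sinh(a + b*y)); normalizes positively skewed, heteroscedastic data."""
          return np.log(np.sinh(a + b * y)) / b

      def inv_log_sinh(z, a, b):
          """Back-transform: y = (arcsinh(exp(b*z)) - a) / b."""
          return (np.arcsinh(np.exp(b * z)) - a) / b

      a, b = 0.01, 0.1                          # hypothetical fitted parameters
      flows = np.array([0.5, 5.0, 50.0, 500.0])
      z = log_sinh(flows, a, b)
      assert np.allclose(inv_log_sinh(z, a, b), flows)

    For small values of a + b*y the transformation behaves like a logarithm (strong variance stabilization), while for large values it becomes nearly linear, matching the error pattern described above.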

  18. Cosmic variance of the galaxy cluster weak lensing signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruen, D.; Seitz, S.; Becker, M. R.

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^−1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^−1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  19. Cosmic variance of the galaxy cluster weak lensing signal

    DOE PAGES

    Gruen, D.; Seitz, S.; Becker, M. R.; ...

    2015-04-13

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^−1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^−1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  20. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
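    The general idea can be sketched as follows (this is an illustration, not the authors' estimator): smooth the gene-wise variance against the gene-wise mean with a nonparametric (lowess) fit and shrink each observed variance toward the smoothed curve; statsmodels is assumed for the lowess step and the shrinkage weight is a placeholder.

      # Illustrative sketch of nonparametric mean-variance smoothing with shrinkage.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      def shrunken_variances(expr, shrink=0.5):
          """expr: genes x replicates array of log-expression values."""
          gene_mean = expr.mean(axis=1)
          gene_var = expr.var(axis=1, ddof=1)
          # Nonparametric regression of variance on mean; no parametric form is assumed.
          fit = lowess(gene_var, gene_mean, frac=0.3, return_sorted=False)
          fit = np.clip(fit, 1e-12, None)
          # Shrink each observed gene variance toward its smoothed value.
          return shrink * fit + (1.0 - shrink) * gene_var

      rng = np.random.default_rng(3)
      expr = rng.normal(loc=8.0, scale=1.0, size=(1000, 4))   # synthetic expression matrix
      var_shrunk = shrunken_variances(expr)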

  1. A Nonparametric Mean-Variance Smoothing Method to Assess Arabidopsis Cold Stress Transcriptional Regulator CBF2 Overexpression Microarray Data

    PubMed Central

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181

  2. Assessment of the Performance Potential of Advanced Subsonic Transport Concepts for NASA's Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Nickol, Craig L.; Haller, William J.

    2016-01-01

    NASA's Environmentally Responsible Aviation (ERA) project has matured technologies to enable simultaneous reductions in fuel burn, noise, and nitrogen oxide (NOx) emissions for future subsonic commercial transport aircraft. The fuel burn reduction target was a 50% reduction in block fuel burn (relative to a 2005 best-in-class baseline aircraft), utilizing technologies with an estimated Technology Readiness Level (TRL) of 4-6 by 2020. Progress towards this fuel burn reduction target was measured through the conceptual design and analysis of advanced subsonic commercial transport concepts spanning vehicle size classes from regional jet (98 passengers) to very large twin aisle size (400 passengers). Both conventional tube-and-wing (T+W) concepts and unconventional (over-wing-nacelle (OWN), hybrid wing body (HWB), mid-fuselage nacelle (MFN)) concepts were developed. A set of propulsion and airframe technologies were defined and integrated onto these advanced concepts which were then sized to meet the baseline mission requirements. Block fuel burn performance was then estimated, resulting in reductions relative to the 2005 best-in-class baseline performance ranging from 39% to 49%. The advanced single-aisle and large twin aisle T+W concepts had reductions of 43% and 41%, respectively, relative to the 737-800 and 777-200LR aircraft. The single-aisle OWN concept and the large twin aisle class HWB concept had reductions of 45% and 47%, respectively. In addition to their estimated fuel burn reduction performance, these unconventional concepts have the potential to provide significant noise reductions due, in part, to engine shielding provided by the airframe. Finally, all of the advanced concepts also have the potential for significant NOx emissions reductions due to the use of advanced combustor technology. Noise and NOx emissions reduction estimates were also generated for these concepts as part of the ERA project.

  3. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
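    The picture corresponds directly to a short worked computation: each squared deviation is the area of a square, the variance is the average of those areas, and the standard deviation is the side length of the average square.

      # Sketch: variance as the average squared-deviation "square".
      import numpy as np

      data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
      deviations = data - data.mean()
      square_areas = deviations ** 2           # one square per observation
      variance = square_areas.mean()           # average square (population variance)
      std_dev = np.sqrt(variance)              # side length of the average square
      print(variance, std_dev)                 # 4.0 and 2.0 for this example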

  4. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    PubMed

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
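    At its core, the speckle-variance calculation is a per-pixel interframe variance over the N registered structural frames; a minimal NumPy sketch (not the GPU kernel described in the paper) is:

      # Sketch: per-pixel speckle variance from N registered structural frames.
      import numpy as np

      def speckle_variance(frames):
          """frames: array of shape (N, rows, cols); frames are assumed already registered."""
          mean_frame = frames.mean(axis=0)
          return ((frames - mean_frame) ** 2).mean(axis=0)   # variance over the N frames

      rng = np.random.default_rng(4)
      frames = rng.gamma(shape=2.0, scale=1.0, size=(4, 512, 512))   # synthetic, N = 4
      sv_image = speckle_variance(frames)   # vasculature decorrelates and shows high variance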

  5. Logistics Reduction Technologies for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Broyan, James L., Jr.; Ewert, Michael K.; Fink, Patrick W.

    2014-01-01

    Human exploration missions under study are limited by the launch mass capacity of existing and planned launch vehicles. The logistical mass of crew items is typically considered separate from the vehicle structure, habitat outfitting, and life support systems. Although mass is typically the focus of exploration missions, due to its strong impact on launch vehicle and habitable volume for the crew, logistics volume also needs to be considered. NASA's Advanced Exploration Systems (AES) Logistics Reduction and Repurposing (LRR) Project is developing six logistics technologies guided by a systems engineering cradle-to-grave approach to enable after-use crew items to augment vehicle systems. Specifically, AES LRR is investigating the direct reduction of clothing mass, the repurposing of logistical packaging, the use of autonomous logistics management technologies, the processing of spent crew items to benefit radiation shielding and water recovery, and the conversion of trash to propulsion gases. Reduction of mass has a corresponding and significant impact to logistical volume. The reduction of logistical volume can reduce the overall pressurized vehicle mass directly, or indirectly benefit the mission by allowing for an increase in habitable volume during the mission. The systematic implementation of these types of technologies will increase launch mass efficiency by enabling items to be used for secondary purposes and improve the habitability of the vehicle as mission durations increase. Early studies have shown that the use of advanced logistics technologies can save approximately 20 m³ of volume during transit alone for a six-person Mars conjunction class mission.

  6. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    DTIC Science & Technology

    1984-08-28

    … A version of Pythagoras' theorem then gives the variance decomposition (6.1): var_P0(T) = var_P0(S) + var_P0(T − S). One way to see this is to note … complete sufficient statistics for (β, σ), and that the standardized residuals σ̂⁻¹(y − Xβ̂) are ancillary. Basu's sufficiency-ancillarity theorem …
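    A minimal sketch of the resulting Monte Carlo swindle for normal samples: because the sample mean S is complete sufficient and T − S is ancillary (hence independent of S by Basu's theorem), var(T) = var(S) + var(T − S), and only the second term needs to be simulated while var(S) = σ²/n is known exactly.

      # Sketch: location swindle for the variance of the sample median under normality.
      import numpy as np

      rng = np.random.default_rng(5)
      n, reps, sigma = 11, 20_000, 1.0
      samples = rng.normal(0.0, sigma, size=(reps, n))

      T = np.median(samples, axis=1)           # estimator under study (sample median)
      S = samples.mean(axis=1)                 # sufficient statistic (sample mean)

      naive = T.var(ddof=1)                            # direct Monte Carlo estimate of var(T)
      swindle = sigma**2 / n + (T - S).var(ddof=1)     # known var(S) + simulated var(T - S)
      print(naive, swindle)                            # both estimate var(T); the swindle is less noisy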

  7. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
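    For balanced data the ANOVA (method-of-moments) estimators reduce to a few lines; the sketch below is illustrative, with synthetic setup errors in millimetres, and returns the population mean, the systematic (inter-patient) component Σ, and the random (intra-patient) component σ.

      # Sketch: one-factor random-effects ANOVA estimates of setup-error components.
      import numpy as np

      def setup_error_components(errors):
          """errors: (patients, fractions) array of setup errors, balanced data."""
          k, m = errors.shape
          patient_means = errors.mean(axis=1)
          grand_mean = errors.mean()
          ms_between = m * ((patient_means - grand_mean) ** 2).sum() / (k - 1)
          ms_within = ((errors - patient_means[:, None]) ** 2).sum() / (k * (m - 1))
          sigma_systematic = np.sqrt(max((ms_between - ms_within) / m, 0.0))   # Sigma
          sigma_random = np.sqrt(ms_within)                                    # sigma
          return grand_mean, sigma_systematic, sigma_random

      rng = np.random.default_rng(6)
      patient_offset = rng.normal(0.0, 2.0, size=(30, 1))                  # true Sigma = 2 mm
      errors = 1.0 + patient_offset + rng.normal(0.0, 1.5, size=(30, 5))   # true sigma = 1.5 mm
      print(setup_error_components(errors))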

  8. The Impact of Advanced Placement Courses on High School Students Taking the Scholastic Aptitude Test.

    ERIC Educational Resources Information Center

    Thomas, L. M.; Thomas, Suzanne G.

    This obtrusive post-hoc quasi-experimental study investigated Scholastic Assessment Test (SAT) scores of 111 high school students in grades 10 through 12. Fifty-three students were enrolled in at least one Advanced Placement (AP) course at the time of the study. General factorial analysis of variance (ANOVA) tested for significant differences…

  9. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
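    A simplified sketch of the second stage of that model, assuming the Kalman-filter innovations are already available: a Wald-type sequential probability ratio test for an increase in residual variance, with a reset at the lower boundary standing in for the paper's modification (the exact modification is an assumption here).

      # Sketch: sequential (SPRT-style) test for a change in residual variance.
      import numpy as np

      def sprt_variance_change(residuals, sigma0, sigma1, alpha=0.01, beta=0.01):
          """Return the index at which the larger variance sigma1^2 is accepted, or None."""
          upper = np.log((1 - beta) / alpha)
          lower = np.log(beta / (1 - alpha))
          llr = 0.0
          for k, r in enumerate(residuals):
              # log-likelihood ratio of N(0, sigma1^2) vs N(0, sigma0^2) for one residual
              llr += np.log(sigma0 / sigma1) + 0.5 * r**2 * (1 / sigma0**2 - 1 / sigma1**2)
              if llr >= upper:
                  return k            # change detected
              if llr <= lower:
                  llr = 0.0           # accept H0 and keep monitoring (modified, resetting test)
          return None

      rng = np.random.default_rng(7)
      res = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 1.3, 300)])
      print(sprt_variance_change(res, sigma0=1.0, sigma1=1.3))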

  10. Variance in Math Achievement Attributable to Visual Cognitive Constructs

    ERIC Educational Resources Information Center

    Oehlert, Jeremy J.

    2012-01-01

    Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…

  11. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness...

  12. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness...

  13. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness...

  14. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF 1974...

  15. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF 1974...

  16. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF 1974...

  17. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF 1974...

  18. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF 1974...

  19. Supersonic jet shock noise reduction

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1984-01-01

    Shock-cell noise is identified as a potentially significant problem for advanced supersonic aircraft at takeoff. Therefore, NASA conducted fundamental studies of the phenomena involved and model-scale experiments aimed at developing means of noise reduction. The results of a series of studies conducted to determine means by which supersonic jet shock noise can be reduced to acceptable levels for advanced supersonic cruise aircraft are reviewed. Theoretical studies were conducted on the shock associated noise of supersonic jets from convergent-divergent (C-D) nozzles. Laboratory studies were conducted on the influence of narrowband shock screech on broadband noise and on means of screech reduction. The usefulness of C-D nozzle passages was investigated at model scale for single-stream and dual-stream nozzles. The effect of off-design pressure ratio was determined under static and simulated flight conditions for jet temperatures up to 960 K. Annular and coannular flow passages with center plugs and multi-element suppressor nozzles were evaluated, and the effect of plug tip geometry was established. In addition to the far-field acoustic data, mean and turbulent velocity distributions were measured with a laser velocimeter, and shadowgraph images of the flow field were obtained.

  20. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).

  1. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    PubMed

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
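    A minimal sketch of a two-population genomic relationship matrix with the scaling property described above (a VanRaden-style construction; the genotype coding and dimensions are assumptions, not the authors' software):

      # Sketch: two-population GRM with within-population scaling and a geometric-mean
      # scaling factor for the across-population block.
      import numpy as np

      def grm_two_populations(M1, M2):
          """M1, M2: individuals x markers genotype matrices coded 0/1/2 for the same markers."""
          p1 = M1.mean(axis=0) / 2.0                      # current allele frequencies, population 1
          p2 = M2.mean(axis=0) / 2.0                      # current allele frequencies, population 2
          Z1 = M1 - 2.0 * p1                              # centre each population with its own frequencies
          Z2 = M2 - 2.0 * p2
          s1 = 2.0 * np.sum(p1 * (1.0 - p1))              # within-population scaling factors
          s2 = 2.0 * np.sum(p2 * (1.0 - p2))
          G11 = Z1 @ Z1.T / s1
          G22 = Z2 @ Z2.T / s2
          G12 = Z1 @ Z2.T / np.sqrt(s1 * s2)              # across block: sqrt of the product of scalings
          return np.block([[G11, G12], [G12.T, G22]])

    With this choice the across-population scaling factor equals the product of the square roots of the within-population scaling factors, which is the condition under which the genetic correlation is estimated without the correction factor discussed above.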

  2. ESO Advanced Data Products for the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Retzlaff, J.; Delmotte, N.; Rite, C.; Rosati, P.; Slijkhuis, R.; Vandame, B.

    2006-07-01

    Advanced Data Products, that is, completely reduced, fully characterized science-ready data sets, play a crucial role in the success of the Virtual Observatory as a whole. We report on ongoing work at ESO towards the creation and publication of Advanced Data Products in compliance with present VO standards on resource metadata. The new deep NIR multi-color mosaic of the GOODS/CDF-S region is used to showcase different aspects of the entire process: data reduction employing our MVM-based reduction pipeline, calibration and data characterization procedures, standardization of metadata content, and, finally, a prospect of the scientific potential illustrated by new results on deep galaxy number counts.

  3. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product, so quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variation reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of suppliers’ components, the manufacturer must invest part of its financial resources in the suppliers’ learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and both internal and external quality costs. The learning curve determines how the suppliers’ employees respond to the learning process in reducing component variance.

  4. 77 FR 43083 - Federal Acquisition Regulation; Information Collection; Advance Payments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    ...; Information Collection; Advance Payments AGENCIES: Department of Defense (DOD), General Services... Paperwork Reduction Act, the Regulatory Secretariat will be submitting to the Office of Management and... requirement concerning advance payments. Public comments are particularly invited on: Whether this collection...

  5. 76 FR 19003 - Land Disposal Restrictions: Nevada and California; Site Specific Treatment Variances for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... both a site-specific treatment variance to U.S. Ecology Nevada (USEN) located in Beatty, Nevada and... site-specific treatment variance to U.S. Ecology Nevada (USEN) located in Beatty, Nevada and withdraw... section of this document. II. Does this action apply to me? This proposal applies only to U. S. Ecology...

  6. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  7. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly when related to cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are examples of such impacts.
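    The contrast can be sketched on a synthetic two-tone signal; the Capon spectrum below uses the standard form P(f) = 1 / (a(f)^H R^-1 a(f)), and the filter order p plays the role of the "degrees of freedom" discussed above (all parameters are illustrative choices).

      # Sketch: periodogram versus minimum-variance (Capon) spectral estimates.
      import numpy as np

      rng = np.random.default_rng(8)
      fs, n = 1000.0, 1024
      t = np.arange(n) / fs
      x = np.sin(2*np.pi*110*t) + 0.5*np.sin(2*np.pi*140*t) + 0.5*rng.standard_normal(n)

      # Fourier (periodogram) estimate
      freqs = np.fft.rfftfreq(n, 1/fs)
      periodogram = np.abs(np.fft.rfft(x))**2 / (fs * n)

      # Capon / minimum-variance estimate
      p = 30                                               # filter order (degrees of freedom)
      X = np.lib.stride_tricks.sliding_window_view(x, p)   # length-p data snapshots
      R = X.T @ X / X.shape[0]                             # sample autocorrelation matrix
      R_inv = np.linalg.inv(R + 1e-6 * np.eye(p))          # small diagonal load for stability
      capon = np.empty_like(freqs)
      for i, f in enumerate(freqs):
          a = np.exp(-2j * np.pi * f * np.arange(p) / fs)  # steering vector at frequency f
          capon[i] = 1.0 / np.real(a.conj() @ R_inv @ a)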

  8. Aircraft engine pollution reduction.

    NASA Technical Reports Server (NTRS)

    Rudey, R. A.

    1972-01-01

    The effect of engine operation on the types and levels of the major aircraft engine pollutants is described and the major factors governing the formation of these pollutants during the burning of hydrocarbon fuel are discussed. Methods being explored to reduce these pollutants are discussed and their application to several experimental research programs is pointed out. Results showing significant reductions in the levels of carbon monoxide, unburned hydrocarbons, and oxides of nitrogen obtained from experimental combustion research programs are presented and discussed to point out potential application to aircraft engines. An experimental program designed to develop and demonstrate these and other advanced, low pollution combustor design methods is described. Results that have been obtained to date indicate considerable promise for reducing advanced engine exhaust pollutants to levels significantly below those of current engines.

  9. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound.

    PubMed

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  10. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    NASA Astrophysics Data System (ADS)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performance of most other existing HAR-RRV-type models in addition to the models used in this paper.
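    A schematic of a HAR regression for realized range-based variance with structural-change dummies (this illustrates the general form only, not the paper's exact specification; the break dates would come from the ICSS algorithm, which is not reproduced here):

      # Sketch: HAR-RRV design matrix with level-shift dummies at assumed break dates.
      import numpy as np

      def har_rrv_design(rrv, breaks=()):
          """Regress RRV_{t+1} on daily, weekly (5-day) and monthly (22-day) averages."""
          T = len(rrv)
          rows, y = [], []
          for t in range(22, T - 1):
              daily = rrv[t]
              weekly = rrv[t - 4:t + 1].mean()
              monthly = rrv[t - 21:t + 1].mean()
              dummies = [1.0 if t >= b else 0.0 for b in breaks]   # structural-change terms
              rows.append([1.0, daily, weekly, monthly, *dummies])
              y.append(rrv[t + 1])
          return np.asarray(y), np.asarray(rows)

      rng = np.random.default_rng(9)
      rrv = np.abs(rng.standard_normal(1000)) * 1e-4     # synthetic RRV series
      y, X = har_rrv_design(rrv, breaks=(400, 700))      # hypothetical break dates
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS estimates of the dummy-augmented HAR model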

  11. A proxy for variance in dense matching over homogeneous terrain

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  12. Follow-On Technology Requirement Study for Advanced Subsonic Transport

    NASA Technical Reports Server (NTRS)

    Wendus, Bruce E.; Stark, Donald F.; Holler, Richard P.; Funkhouser, Merle E.

    2003-01-01

A study was conducted to define and assess the critical or enabling technologies required for a year 2005 entry into service (EIS) engine for subsonic commercial aircraft, with NASA Advanced Subsonic Transport goals used as benchmarks. The year 2005 EIS advanced technology engine is an Advanced Ducted Propulsor (ADP) engine. Performance analysis showed that the ADP design offered many advantages compared to a baseline turbofan engine. An airplane/engine simulation study using a long-range quad aircraft quantified the effects of the ADP engine on the economics of typical airline operation. Results of the economic analysis show the ADP propulsion system provides a 6% reduction in direct operating cost plus interest, with half the reduction resulting from reduced fuel consumption. Critical and enabling technologies for the year 2005 EIS ADP were identified and prioritized.

  13. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure for the data. Under such circumstances, ordinary least squares no longer provides minimum-variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
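
    One common way to handle the multiplicative error structure described above is to model the residual variance as a power of the predictor, Var(e_i) = sigma^2 * x_i^k, estimate k from the log squared OLS residuals, and refit by weighted least squares. A compact sketch with simulated stump and diameter values standing in for real data (the variance form and two-step procedure are generic choices, not necessarily the author's specification):

      import numpy as np

      rng = np.random.default_rng(1)
      stump = rng.uniform(10, 80, 200)                        # stump diameter (illustrative units)
      dbh = 1.1 * stump * (1 + 0.08 * rng.normal(size=200))   # tree diameter with multiplicative error

      X = np.column_stack([np.ones_like(stump), stump])
      beta_ols, *_ = np.linalg.lstsq(X, dbh, rcond=None)      # step 1: ordinary least squares
      resid = dbh - X @ beta_ols

      Z = np.column_stack([np.ones_like(stump), np.log(stump)])
      gamma, *_ = np.linalg.lstsq(Z, np.log(resid ** 2 + 1e-12), rcond=None)
      k = gamma[1]                                            # estimated variance power in x^k

      w = stump ** (-k)                                       # step 2: weighted least squares
      beta_wls, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], dbh * np.sqrt(w), rcond=None)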

  14. A Study of Child Variance, Volume 4: The Future; Conceptual Project in Emotional Disturbance.

    ERIC Educational Resources Information Center

    Rhodes, William C.

    Presented in the fourth volume in a series are a discussion of critical issues related to child variance and predictions for how society will perceive and respond to child variance in the future. Reviewed in an introductory chapter are the contents of the first three volumes which deal with conceptual models, interventions, and service delivery…

  15. Explaining Common Variance Shared by Early Numeracy and Literacy

    ERIC Educational Resources Information Center

    Davidse, N. J.; De Jong, M. T.; Bus, A. G.

    2014-01-01

    How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…

  16. Postsynaptic degeneration as revealed by PSD-95 reduction occurs after advanced Aβ and tau pathology in transgenic mouse models of Alzheimer's disease.

    PubMed

    Shao, Charles Y; Mirra, Suzanne S; Sait, Hameetha B R; Sacktor, Todd C; Sigurdsson, Einar M

    2011-09-01

Impairment of synaptic plasticity underlies memory dysfunction in Alzheimer's disease (AD). Molecules involved in this plasticity, such as PSD-95, a major postsynaptic scaffold protein at excitatory synapses, may play an important role in AD pathogenesis. We examined the distribution of PSD-95 in transgenic mouse models of amyloidopathy (5XFAD) and tauopathy (JNPL3) as well as in AD brains using double-labeling immunofluorescence and confocal microscopy. In wild-type control mice, PSD-95 primarily labeled neuropil with distinct distribution in hippocampal apical dendrites. In 3-month-old 5XFAD mice, PSD-95 distribution was similar to that of wild-type mice despite significant Aβ deposition. However, in 6-month-old 5XFAD mice, PSD-95 immunoreactivity in apical dendrites markedly decreased and prominent immunoreactivity was noted in the soma of CA1 neurons. Similarly, PSD-95 immunoreactivity disappeared from apical dendrites and accumulated in neuronal soma in 14-month-old, but not in 3-month-old, JNPL3 mice. In AD brains, PSD-95 accumulated in Hirano bodies in hippocampal neurons. Our findings support the notion that either Aβ or tau can induce reduction of PSD-95 in excitatory synapses in the hippocampus. Furthermore, this PSD-95 reduction is not an early event but occurs as the pathologies advance. Thus, the time-dependent loss of PSD-95 from synapses and its accumulation in neuronal soma in transgenic mice and in Hirano bodies in AD may mark postsynaptic degeneration that underlies long-term functional deficits.

  17. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimation of influence-function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  18. Rising Variance and Synchrony across Marine and Terrestrial Ecosystems of Western North America

    NASA Astrophysics Data System (ADS)

    Black, B.; Di Lorenzo, E.

    2016-12-01

Along the west coast of North America, the strength of the wintertime North Pacific High (NPH) is closely linked to wintertime upwelling in the California Current, and thus to various upper-trophic indicators of ecosystem productivity including seabird breeding success, fish growth-increment chronologies, and a copepod community index. The wintertime NPH also affects the Pacific storm track and thus precipitation and tree growth on land. Over the past century, variance of the winter NPH has risen and coincided with rising variance of drought, river discharge, upwelling, and sea level indicators. Moreover, synchrony within and among these indicators has also increased, especially on land, where synchrony is further magnified by long-term warming trends. Rising synchrony is also imprinted on biological time series, including rockfish chronologies and a network of moisture-sensitive blue oak chronologies, which have risen to their highest levels of synchrony in at least the past four centuries. The ultimate drivers of rising NPH variance appear to be rising variability in the El Niño Southern Oscillation and more efficient coupling between the tropics and the extratropics. These results suggest that rising climate variance is increasingly limiting biological variability, with implications for ecosystem resilience and the degree of coupling across the land-sea interface.

  19. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through the standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this
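
    The two image products compared above are per-pixel statistics over a short video burst: the Time-exposure image is the temporal mean and the Variance image is built from the temporal standard deviation, and the sandbar proxy is the cross-shore location of the brightest band in each. A minimal sketch, assuming a rectified video stack shaped (frames, alongshore rows, cross-shore columns):

      import numpy as np

      def bar_proxies(stack):
          # Time-exposure image = temporal mean; "Variance" image = temporal standard deviation.
          timex = stack.mean(axis=0)
          var_img = stack.std(axis=0)
          xi_ti = timex.argmax(axis=1)        # cross-shore pixel of maximum mean intensity per row
          xi_va = var_img.argmax(axis=1)      # cross-shore pixel of maximum variability per row
          return xi_ti, xi_va

      stack = np.random.default_rng(2).random((600, 64, 512))   # placeholder 10-min burst at 1 Hz
      xi_ti, xi_va = bar_proxies(stack)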

  20. 76 FR 18921 - Land Disposal Restrictions: Nevada and California; Site Specific Treatment Variances for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... final actions to both issue a site- specific treatment variance to U.S. Ecology Nevada (USEN) in Beatty... me? This action applies only to U.S. Ecology Nevada located in Beatty, Nevada and to Chemical Waste... This Variance A. U.S. Ecology Nevada Petition B. What Type and How Much Waste Will be Subject to This...

  1. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    NASA Astrophysics Data System (ADS)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
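
    The bioturbation proxy used above is simply the variance of Hounsfield values within each flat-lying slice of the scanned core: reworked intervals mix matrix, burrow fill, and dense minerals, so their variance rises. A short sketch, assuming the XCT volume is a NumPy array ordered (depth slice, row, column) in HU; the air cutoff is an assumed value:

      import numpy as np

      def slice_variance(volume_hu, air_threshold=-500):
          # Variance of HU values per horizontal slice, ignoring voxels of air around the core.
          out = []
          for hu_slice in volume_hu:
              sediment = hu_slice[hu_slice > air_threshold]
              out.append(sediment.var() if sediment.size else np.nan)
          return np.asarray(out)

      volume = np.random.default_rng(3).normal(1450, 30, (200, 128, 128))   # placeholder core scan
      profile = slice_variance(volume)    # peaks in this profile flag reworked (bioturbated) intervals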

  2. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
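
    The idea of extending a regular per-wafer correction model with an edge-specific component can be illustrated by an ordinary least-squares fit in which one extra regressor is active only beyond a chosen radius. The functional form, radius, and synthetic data below are illustrative assumptions, not the model or data used in the study:

      import numpy as np

      def fit_overlay_model(x, y, dx, r_edge=135.0, r_max=150.0):
          # dx = translation + linear wafer terms + a radial component active only beyond r_edge.
          r = np.hypot(x, y)
          edge = np.maximum(r - r_edge, 0.0) / (r_max - r_edge)
          X = np.column_stack([np.ones_like(x), x, y, edge * x / np.maximum(r, 1e-9)])
          coef, *_ = np.linalg.lstsq(X, dx, rcond=None)
          return coef, X @ coef                                 # coefficients and fitted correction

      rng = np.random.default_rng(4)
      x, y = rng.uniform(-150, 150, (2, 400))                   # sparse overlay sample positions (mm)
      dx = 2e-3 * x + 5e-3 * np.maximum(np.hypot(x, y) - 135, 0) + rng.normal(0, 1e-3, 400)
      coef, fitted = fit_overlay_model(x, y, dx)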

  3. Advanced supersonic propulsion study, phase 3

    NASA Technical Reports Server (NTRS)

    Howlett, R. A.; Johnson, J.; Sabatella, J.; Sewall, T.

    1976-01-01

    The variable stream control engine is determined to be the most promising propulsion system concept for advanced supersonic cruise aircraft. This concept uses variable geometry components and a unique throttle schedule for independent control of two flow streams to provide low jet noise at takeoff and high performance at both subsonic and supersonic cruise. The advanced technology offers a 25% improvement in airplane range and an 8 decibel reduction in takeoff noise, relative to first generation supersonic turbojet engines.

  4. 40 CFR 268.44 - Variance from a treatment standard.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...

  5. 40 CFR 268.44 - Variance from a treatment standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...

  6. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

Precise and efficient noise variance estimation is very important when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study proposes using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on this noise variance estimate, an improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well on the test signals of the electro-mechanical transmission system: it effectively suppresses transient interference in the voltage, current, and oil pressure signals while preserving their dynamic characteristics.
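
    As a point of reference for the method above, a conventional wavelet de-noising pass estimates the noise standard deviation from the finest-scale detail coefficients and shrinks all detail coefficients against a threshold derived from it. The sketch below uses PyWavelets with the standard MAD estimator and a soft threshold, whereas the paper replaces these with a two-state Gaussian mixture noise estimate and its own improved threshold function:

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=4):
          # Soft shrinkage with a universal threshold and a MAD-based noise sigma estimate.
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise std from finest detail scale
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
          shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(shrunk, wavelet)[: len(signal)]

      t = np.linspace(0.0, 1.0, 2048)
      noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(5).normal(size=t.size)
      clean = wavelet_denoise(noisy)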

  7. Technical and biological variance structure in mRNA-Seq data: life in the real world

    PubMed Central

    2012-01-01

    Background mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution in which the variance is equal to the mean. The Negative Binomial distribution which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution as has been reported previously and biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and sample size determination which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
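
    The quadratic mean-variance relationship reported above is the defining property of the Negative Binomial model, Var(Y) = mu + alpha*mu^2, which collapses to the Poisson (Var = mu) as the dispersion alpha goes to zero. A small simulation sketch with arbitrary parameter values:

      import numpy as np

      def nb_counts(mu, alpha, size, rng):
          # Negative Binomial draws with mean mu and variance mu + alpha * mu**2.
          n = 1.0 / alpha
          p = n / (n + mu)
          return rng.negative_binomial(n, p, size)

      rng = np.random.default_rng(6)
      for mu in (5.0, 50.0, 500.0):
          y = nb_counts(mu, alpha=0.1, size=20000, rng=rng)
          print(mu, y.mean(), y.var(), mu + 0.1 * mu ** 2)      # sample variance tracks mu + alpha*mu^2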

  8. Benefits of Integration of Aerojet Rocketdyne and RTI Advanced Gasification Technologies for Hydrogen-Rich Syngas Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Gupta, Vijay; Denton, David; Sharma, Pradeep

The key objective for this project was to evaluate the potential to achieve substantial reductions in the production cost of H2-rich syngas via coal gasification with near-zero emissions, due to the cumulative and synergistic benefits realized when multiple advanced technologies are integrated into the overall conversion process. In this project, Aerojet Rocketdyne's (AR's) advanced gasification technology (currently being offered as R-GAS™) and RTI International's (RTI's) advanced warm syngas cleanup technologies were evaluated via a number of comparative techno-economic case studies. AR's advanced gasification technology consists of a dry solids pump and a compact gasifier system. Based on the unique design of this gasifier, it has been shown to reduce the capital cost of the gasification block by between 40 and 50%. At the start of this project, actual experimental work had been demonstrated through pilot plant systems for both the gasifier and dry solids pump. RTI's advanced warm syngas cleanup technologies consist primarily of RTI's Warm Gas Desulfurization Process (WDP) technology, which effectively allows decoupling of the sulfur and CO2 removal, allowing for more flexibility in the selection of the CO2 removal technology, plus associated advanced technologies for direct sulfur recovery and water gas shift (WGS). WDP has been demonstrated at pre-commercial scale using an activated amine carbon dioxide recovery process, which would not have been possible if a majority of the sulfur had not been removed from the syngas by WDP. This pre-commercial demonstration of RTI's advanced warm syngas cleanup system was conducted in parallel to the activities on this project. The technical data and cost information from this pre-commercial demonstration were extensively used in this project during the techno-economic analysis. With this project, both of RTI's advanced WGS technologies were investigated. Because RTI's advanced fixed-bed WGS

  9. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; the gyro data retrieved aboveground is therefore not only very long and time-consuming to process, but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is also characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment confirm that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series.
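
    For orientation, the static Allan variance that DAVAR slides over the data compares averages of successive clusters of gyro samples; the dynamic version repeats this inside a moving window. A plain overlapping Allan variance sketch in NumPy (the windowing, fast recursion, and discontinuous-data handling of the paper are omitted, and the gyro series is a placeholder):

      import numpy as np

      def overlapping_allan_variance(rate, tau0, m):
          # Overlapping Allan variance of a rate series for cluster size m (tau = m * tau0).
          csum = np.concatenate(([0.0], np.cumsum(rate)))
          avg = (csum[m:] - csum[:-m]) / m                      # cluster averages at every start index
          diffs = avg[m:] - avg[:-m]                            # averages one full cluster apart
          return 0.5 * np.mean(diffs ** 2), m * tau0

      gyro = np.random.default_rng(7).normal(0.0, 0.01, 100000)   # placeholder gyro rate output
      for m in (1, 10, 100, 1000):
          avar, tau = overlapping_allan_variance(gyro, tau0=0.01, m=m)
          print(tau, np.sqrt(avar))                               # Allan deviation vs averaging time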

  10. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD often keeps working underground for several days; the gyro data retrieved aboveground is therefore not only very long and time-consuming to process, but also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is also characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment confirm that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series. PMID:27941600

  11. The variance of the locally measured Hubble parameter explained with different estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk

We study the expected variance of measurements of the Hubble constant, H_0, as calculated either in linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent-level determination of the Hubble constant in the local universe.

  12. Subjective neighborhood assessment and physical inactivity: An examination of neighborhood-level variance.

    PubMed

    Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen

    2018-06-01

Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods within a medium-sized city, Texas City, Texas, using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, higher degrees of disagreement among residents within neighborhoods about their neighborhood quality were significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. An advanced carbon reactor subsystem for carbon dioxide reduction

    NASA Technical Reports Server (NTRS)

    Noyes, Gary P.; Cusick, Robert J.

    1986-01-01

An evaluation is presented of the development status of an advanced carbon-reactor subsystem (ACRS) for the production of water and dense, solid carbon from CO2 and hydrogen, as required in physicochemical air revitalization systems for long-duration manned space missions. The ACRS consists of a Sabatier Methanation Reactor (SMR) that reduces CO2 with hydrogen to form methane and water, a gas-liquid separator to remove product water from the methane, and a Carbon Formation Reactor (CFR) to pyrolyze methane to carbon and hydrogen; the hydrogen is recycled to the SMR, while the product carbon is periodically removed from the CFR. A preprototype ACRS under development for the NASA Space Station is described.
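
    For reference, the two reaction steps that close this loop (standard stoichiometry, stated here for clarity rather than quoted from the record) are:

      \begin{align*}
      \mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} && \text{(Sabatier methanation reactor)}\\
      \mathrm{CH_4} &\rightarrow \mathrm{C} + 2\,\mathrm{H_2} && \text{(carbon formation reactor)}
      \end{align*}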

  14. The contribution of the mitochondrial genome to sex-specific fitness variance.

    PubMed

    Smith, Shane R T; Connallon, Tim

    2017-05-01

Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it leaves mtDNA evolution indifferent to changes that exclusively benefit males. The constrained response of mtDNA to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and the distribution of fitness effects (DFE) among mutations, including the correlation of mutant fitness effects between the sexes, on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes, and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  15. Shared genetic variance between obesity and white matter integrity in Mexican Americans.

    PubMed

    Spieker, Elena A; Kochunov, Peter; Rowland, Laura M; Sprooten, Emma; Winkler, Anderson M; Olvera, Rene L; Almasy, Laura; Duggirala, Ravi; Fox, Peter T; Blangero, John; Glahn, David C; Curran, Joanne E

    2015-01-01

Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18-81 years; 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [body mass index (BMI; kg/m²) and waist circumference (WC; in)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy). Whole-brain average and regional fractional anisotropy values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h² = 0.58), WC (h² = 0.57), and FA (h² = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = -0.25), body (ρG = -0.30), and splenium (ρG = -0.26) of the corpus callosum, internal capsule (ρG = -0.29), and thalamic radiation (ρG = -0.31) (all p's = 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = -0.39, p = 0.020; ρG = -0.39, p = 0.030), which highlights region-specific variation in neural correlates of obesity. This may suggest that increase in obesity and reduced white matter integrity share common genetic risk factors.

  16. Great Expectations in the Joint Advanced Manufacturing Region

    DTIC Science & Technology

    2016-12-01

would be continuous experimentation and risk reduction prototyping. The entire manufacturing life cycle - design, testing, product development...on the back of a napkin, they decided to call their effort the Joint Advanced Manufacturing Region (JAMR) and manage it as an Integrated Product ... designed to support the continuous experimentation of advanced manufacturing tactics, techniques and procedures under actual operational or combat

  17. Heterogeneous variances in multi-environment yield trials for corn hybrids

    USDA-ARS?s Scientific Manuscript database

    Recent developments in statistics and computing have enabled much greater levels of complexity in statistical models of multi-environment yield trial data. One particular feature of interest to breeders is simultaneously modeling heterogeneity of variances among environments and cultivars. Our obj...

  18. Overview of Advanced Turbine Systems Program

    NASA Astrophysics Data System (ADS)

    Webb, H. A.; Bajura, R. A.

    The US Department of Energy initiated a program to develop advanced gas turbine systems to serve both central power and industrial power generation markets. The Advanced Turbine Systems (ATS) Program will lead to commercial offerings by the private sector by 2002. ATS will be developed to fire natural gas but will be adaptable to coal and biomass firing. The systems will be: highly efficient (15 percent improvement over today's best systems); environmentally superior (10 percent reduction in nitrogen oxides over today's best systems); and cost competitive (10 percent reduction in cost of electricity). The ATS Program has five elements. Innovative cycle development will lead to the demonstration of systems with advanced gas turbine cycles using current gas turbine technology. High temperature development will lead to the increased firing temperatures needed to achieve ATS Program efficiency goals. Ceramic component development/demonstration will expand the current DOE/CE program to demonstrate industrial-scale turbines with ceramic components. Technology base will support the overall program by conducting research and development (R&D) on generic technology issues. Coal application studies will adapt technology developed in the ATS program to coal-fired systems being developed in other DOE programs.

  19. 40 CFR 142.303 - Which size public water systems can receive a small system variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...

  20. 40 CFR 142.303 - Which size public water systems can receive a small system variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...