Sample records for driveway mean-variance optimization

  1. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
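
    A minimal sketch of the mean-variance model described above, in Python with NumPy: minimize the portfolio variance w'Σw subject to the budget constraint and a target expected return, solved in closed form via the Lagrangian. The return data and target below are illustrative placeholders, not the FBMKLCI weekly returns used in the study.

      import numpy as np

      def min_variance_weights(returns, target):
          """returns: (T, N) array of asset returns; target: required mean return."""
          mu = returns.mean(axis=0)                  # expected returns
          sigma = np.cov(returns, rowvar=False)      # covariance matrix (risk)
          ones = np.ones(len(mu))
          inv = np.linalg.inv(sigma)
          # Efficient-frontier scalars for the two constraints (budget = 1, mean = target).
          a = ones @ inv @ ones
          b = ones @ inv @ mu
          c = mu @ inv @ mu
          d = a * c - b ** 2
          lam = (c - b * target) / d                 # multiplier tied to the budget constraint
          gam = (a * target - b) / d                 # multiplier tied to the return constraint
          return inv @ (lam * ones + gam * mu)

      rng = np.random.default_rng(0)
      sample = rng.normal(0.002, 0.02, size=(260, 5))   # 5 hypothetical stocks, 260 weeks
      w = min_variance_weights(sample, target=0.002)
      print(w, w.sum())                                  # weights sum to 1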

  2. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to the economic dispatch problem.
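
    To make the economic dispatch problem itself concrete, the sketch below solves a small ED instance with quadratic costs by classical lambda-iteration (equal incremental cost); it is not the MVMOS algorithm, and the unit coefficients and demand are illustrative rather than the paper's 3-, 13- and 20-unit test systems.

      def economic_dispatch(units, demand, tol=1e-6):
          """units: list of (a, b, c, pmin, pmax) with cost a + b*P + c*P^2; returns outputs."""
          lo, hi = 0.0, 1000.0                       # bracket on the incremental cost lambda
          while hi - lo > tol:
              lam = 0.5 * (lo + hi)
              # At incremental cost lam each unit runs at P = (lam - b) / (2c), clipped to its limits.
              P = [min(max((lam - b) / (2 * c), pmin), pmax)
                   for a, b, c, pmin, pmax in units]
              if sum(P) < demand:
                  lo = lam                           # need more output -> raise lambda
              else:
                  hi = lam
          return P

      units = [(500, 5.3, 0.004, 200, 450),          # illustrative cost coefficients and limits
               (400, 5.5, 0.006, 150, 350),
               (200, 5.8, 0.009, 100, 225)]
      print(economic_dispatch(units, demand=975))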

  3. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint on the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
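
    A small Monte Carlo sketch (not from the paper) illustrating the r = N/T effect described above: as r approaches 1, the out-of-sample variance of the sample minimum-variance portfolio blows up roughly like 1/(1 - r) while the in-sample variance shrinks. The true covariance is taken to be the identity purely for simplicity.

      import numpy as np

      rng = np.random.default_rng(1)
      N, trials = 50, 200
      for T in (250, 100, 60, 55):
          var_in, var_out = [], []
          for _ in range(trials):
              X = rng.standard_normal((T, N))        # i.i.d. returns, true Sigma = identity
              S = np.cov(X, rowvar=False)
              w = np.linalg.solve(S, np.ones(N))
              w /= w.sum()                           # budget constraint only
              var_in.append(w @ S @ w)               # in-sample (estimated) variance
              var_out.append(w @ w)                  # true out-of-sample variance, since Sigma = I
          r = N / T
          print(f"r={r:.2f}  in-sample={np.mean(var_in):.4f}  "
                f"out-of-sample={np.mean(var_out):.4f}  1/(N*(1-r))={1 / (N * (1 - r)):.4f}")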

  4. Optimal control of LQG problem with an explicit trade-off between mean and variance

    NASA Astrophysics Data System (ADS)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm obtained in this article.

  5. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    NASA Astrophysics Data System (ADS)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.

  6. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of the data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each level of return compared to the mean-variance approach.

  7. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    NASA Astrophysics Data System (ADS)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of incidents or a lack of historical data, which are common in the economic and social environment. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under the mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.

  8. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    When investing in stocks, investors are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of an investment portfolio of stocks using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using autoregressive moving average (ARMA) models, while the non-constant volatility is analyzed using generalized autoregressive conditional heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is the proportion of investment in each stock analyzed.
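
    A sketch of the final optimization step described above, assuming the one-step-ahead mean forecasts (e.g. from ARMA models) and the forecast covariance (e.g. with GARCH variances on the diagonal) are already available. Under the negative exponential utility, the Lagrangian with the budget constraint gives closed-form weights. All numerical inputs are placeholders, not estimates for actual Indonesian stocks.

      import numpy as np

      def exp_utility_weights(mu, sigma, risk_aversion):
          # Maximize mu'w - (risk_aversion/2) w'Sigma w  subject to  sum(w) = 1.
          inv = np.linalg.inv(sigma)
          ones = np.ones(len(mu))
          a = ones @ inv @ ones
          b = ones @ inv @ mu
          eta = (risk_aversion - b) / a              # Lagrange multiplier for the budget constraint
          return inv @ (mu + eta * ones) / risk_aversion

      mu_hat = np.array([0.0012, 0.0009, 0.0015])    # placeholder ARMA mean forecasts
      vol_hat = np.array([0.021, 0.018, 0.025])      # placeholder GARCH volatility forecasts
      corr = np.array([[1.0, 0.3, 0.2],
                       [0.3, 1.0, 0.4],
                       [0.2, 0.4, 1.0]])
      sigma_hat = np.outer(vol_hat, vol_hat) * corr
      w = exp_utility_weights(mu_hat, sigma_hat, risk_aversion=3.0)
      print(w, w.sum())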

  9. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    When investing in stocks, investors are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of an investment portfolio of stocks using a non-constant mean and volatility, based on the logarithmic utility function. The non-constant mean is analysed using autoregressive moving average (ARMA) models, while the non-constant volatility is analysed using generalized autoregressive conditional heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.

  10. Quantifying Safety Performance of Driveways on State Highways

    DOT National Transportation Integrated Search

    2012-08-01

    This report documents a research effort to quantify the safety performance of driveways in the State of Oregon. In particular, this research effort focuses on driveways located adjacent to principal arterial state highways with urban or rural des...

  11. Improved business driveway delineation in urban work zones.

    DOT National Transportation Integrated Search

    2015-04-01

    This report documents the efforts and results of a two-year research project aimed at improving driveway delineation in work zones. The first year of the project included a closed-course study to identify the most promising driveway delineation a...

  12. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Livestock pens, driveways and ramps shall be maintained in good repair. They shall be free from sharp or... acceptable construction and maintenance. (c) U.S. Suspects (as defined in § 301.2(xxx)) and dying, diseased... awaiting disposition by the inspector. (d) Livestock pens and driveways shall be so arranged that sharp...

  13. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality-constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
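
    For orientation, the sketch below implements the standard firefly position update (attractiveness decaying with squared distance plus a random step) on a plain mean-variance objective. It is not the modified algorithm of the paper and omits the cardinality and entropy constraints; it only illustrates the metaheuristic being built upon.

      import numpy as np

      def portfolio_cost(w, mu, sigma, risk_aversion=3.0):
          w = np.abs(w) / np.abs(w).sum()            # crude repair onto the simplex
          return risk_aversion * w @ sigma @ w - mu @ w

      def firefly(mu, sigma, n_fireflies=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.1):
          rng = np.random.default_rng(2)
          n = len(mu)
          pop = rng.random((n_fireflies, n))
          cost = np.array([portfolio_cost(x, mu, sigma) for x in pop])
          for _ in range(iters):
              for i in range(n_fireflies):
                  for j in range(n_fireflies):
                      if cost[j] < cost[i]:          # move firefly i toward the brighter firefly j
                          r2 = np.sum((pop[i] - pop[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(n) - 0.5)
                          cost[i] = portfolio_cost(pop[i], mu, sigma)
              alpha *= 0.98                          # cool the random step
          best = pop[np.argmin(cost)]
          return np.abs(best) / np.abs(best).sum()

      R = np.random.default_rng(7).normal(0.001, 0.02, size=(200, 8))   # placeholder return data
      print(firefly(R.mean(axis=0), np.cov(R, rowvar=False)))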

  14. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality-constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645

  15. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features, including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) a computationally efficient implementation, using C interfacing, with an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from CRAN. PMID:26819572

  16. The Development and Preliminary Evaluation of an Education Intervention to Prevent Driveway Run-Over Incidents

    ERIC Educational Resources Information Center

    Armstrong, Kerry A.; Watling, Hanna; Davey, Jeremy

    2016-01-01

    Objective: While driveway run-over incidents continue to be a cause of serious injury and deaths among young children in Australia, few empirically evaluated educational interventions have been developed which address these incidents. Addressing this gap, this study describes the development and evaluation of a paper-based driveway safety…

  17. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section....15 Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly... departments where certified products are prepared, handled, or stored. Docks and areas where cars and vehicles...

  18. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section....15 Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly... departments where certified products are prepared, handled, or stored. Docks and areas where cars and vehicles...

  19. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section....15 Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly... departments where certified products are prepared, handled, or stored. Docks and areas where cars and vehicles...

  20. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr; Lim, Thomas, E-mail: lim@ensiie.fr; Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr

    2013-12-15

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.

  1. Means and Variances without Calculus

    ERIC Educational Resources Information Center

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.

  2. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, namely parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts or regular common value-shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  3. Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ENTRANCE, August 31, 1929, photographer Commercial Photo Company - Internal Revenue Service Headquarters Building, 1111 Constitution Avenue Northwest, Washington, District of Columbia, DC

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, namely parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts or regular common value-shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  5. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    PubMed

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment…

  6. 7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S FORMER RESIDENCE. NOTE BUILDERS VERTICALLY ALIGNED STEM OF BOATS WITH CORNER OF HOUSE BEHIND CAMERA POSITION. - Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA

  7. FACILITY 89. FRONT OBLIQUE TAKEN FROM DRIVEWAY. VIEW FACING NORTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FACILITY 89. FRONT OBLIQUE TAKEN FROM DRIVEWAY. VIEW FACING NORTHEAST. - U.S. Naval Base, Pearl Harbor, Naval Housing Area Makalapa, Junior Officers' Quarters Type K, Makin Place, & Halawa, Makalapa, & Midway Drives, Pearl City, Honolulu County, HI

  8. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…

  9. A Mean variance analysis of arbitrage portfolios

    NASA Astrophysics Data System (ADS)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  10. LOOKING NORTH ALONG THE DRIVEWAY OF THE SCHEETZ PROPERTY SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOOKING NORTH ALONG THE DRIVEWAY OF THE SCHEETZ PROPERTY SHOWING SOUTHWEST AND SOUTHEAST ELEVATIONS OF SCHEETZ HOUSE; BUTTONWOOD TREE TO LEFT STOOD AT ONE CORNER OF THE MILL (BURNED 1929). - Scheetz Farm, 7161 Camp Hill Road, Fort Washington, Montgomery County, PA

  11. Policy on Street and Driveway Access to North Carolina Highways

    DOT National Transportation Integrated Search

    2003-07-01

    The primary concern of those responsible for North Carolina's vast highway system is to provide for the safe and efficient movement of people and goods. As an aid in achieving this goal, this manual sets forth the Policy on Street and Driveway Access...

  12. Full-Depth Asphalt Pavements for Parking Lots and Driveways.

    ERIC Educational Resources Information Center

    Asphalt Inst., College Park, MD.

    The latest information for designing full-depth asphalt pavements for parking lots and driveways is covered in relationship to the continued increase in vehicle registration. It is based on The Asphalt Institute's Thickness Design Manual, Series No. 1 (MS-1), Seventh Edition, which covers all aspects of asphalt pavement thickness design in detail,…

  13. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
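
    A simplified sketch in the spirit of the sparse portfolio idea described above, written with the cvxpy modeling library (our choice of tool, not necessarily the authors' implementation): an ℓ1 penalty added to a static minimum-variance program induces sparsity when assets greatly outnumber observations. The paper's LPO framework applies constrained ℓ1 minimization to the theoretical optimal control; this is only an illustration, with placeholder data.

      import cvxpy as cp
      import numpy as np

      def sparse_min_variance(sigma, mu, target, l1_weight=0.05):
          n = len(mu)
          w = cp.Variable(n)
          objective = cp.Minimize(cp.quad_form(w, sigma) + l1_weight * cp.norm1(w))
          constraints = [cp.sum(w) == 1, mu @ w >= target]
          cp.Problem(objective, constraints).solve()
          return w.value

      rng = np.random.default_rng(3)
      R = rng.normal(0.0005, 0.02, size=(60, 200))            # 200 assets, only 60 observations
      sigma = np.cov(R, rowvar=False) + 1e-3 * np.eye(200)    # ridge keeps the matrix positive definite
      mu = R.mean(axis=0)
      w = sparse_min_variance(sigma, mu, target=float(mu.mean()))
      print(int(np.sum(np.abs(w) > 1e-4)), "active positions out of 200")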

  14. Evaluation of costs to process and manage utility and driveway permits.

    DOT National Transportation Integrated Search

    2014-10-01

    Reviewing and processing utility and driveway permits at the Texas Department of Transportation (TxDOT) requires a considerable amount of involvement and coordination by TxDOT personnel, both at the district and division levels. Currently, TxDOT ...

  15. Mean-variance portfolio selection for defined-contribution pension funds with stochastic salary.

    PubMed

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  16. FRONT ELEVATION, WITH DRIVEWAY ON LEFT HAND SIDE, AND STREET ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FRONT ELEVATION, WITH DRIVEWAY ON LEFT HAND SIDE, AND STREET IN FOREGROUND. VIEW FACING NORTHEAST - Camp H.M. Smith and Navy Public Works Center Manana Title VII (Capehart) Housing, Four-Bedroom, Single-Family Type 10, Birch Circle, Elm Drive, Elm Circle, and Date Drive, Pearl City, Honolulu County, HI

  17. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    PubMed Central

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667

  18. FRONT (LEFT SIDE) OBLIQUE OF HOUSE, WITH DRIVEWAY IN THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FRONT (LEFT SIDE) OBLIQUE OF HOUSE, WITH DRIVEWAY IN THE FOREGROUND. VIEW FACING NORTHEAST - Camp H.M. Smith and Navy Public Works Center Manana Title VII (Capehart) Housing, Three-Bedroom Single-Family Types 8 and 11, Birch Circle, Elm Drive, Elm Circle, and Date Drive, Pearl City, Honolulu County, HI

  19. On the Endogeneity of the Mean-Variance Efficient Frontier.

    ERIC Educational Resources Information Center

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  20. Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control

    PubMed Central

    Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.

    2011-01-01

    Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is, the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed amount of effort option and the risky, variable effort option. Depending on whether participants accepted a variable effort with a mean that was higher, lower or equal to the fixed effort, they could be classified as risk-seeking, risk-averse or risk-neutral. Most subjects were risk-sensitive in our task, consistent with a mean-variance trade-off in effort, thereby underlining the importance of risk-sensitivity in computational models of sensorimotor control. PMID:21208966
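
    A toy illustration (not the experimental protocol) of the mean-variance trade-off in effort described above: if a risky effort is valued as its mean plus a sensitivity parameter times its variance, the indifference point shifts below, at, or above the fixed effort depending on the sign of the sensitivity.

      def effort_value(mean, variance, theta):
          # theta > 0: effort variability is penalized (risk-averse about effort);
          # theta < 0: variability is preferred (risk-seeking); theta = 0: risk-neutral.
          return mean + theta * variance

      fixed_effort, risky_variance = 10.0, 4.0
      for theta, label in [(-0.5, "risk-seeking"), (0.0, "risk-neutral"), (0.5, "risk-averse")]:
          indifferent_mean = fixed_effort - theta * risky_variance   # solves effort_value == fixed_effort
          print(f"{label:12s} indifference mean effort = {indifferent_mean:.1f}")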

  1. Analytic solution to variance optimization with no short positions

    NASA Astrophysics Data System (ADS)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric…

  2. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701
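
    A Monte Carlo subsampling sketch of PD under rarefaction, the baseline against which the exact formulae are checked in the paper. The tree is represented here as a list of (branch length, set of subtended tips) pairs, which is an assumption made for brevity rather than the paper's data structures.

      import random

      def pd_of_sample(branches, sampled_tips):
          # PD = total length of branches subtending at least one sampled tip.
          return sum(length for length, tips in branches if tips & sampled_tips)

      def rarefied_pd(branches, counts, k, draws=2000, seed=0):
          """counts: dict tip -> abundance; k: rarefaction depth in individuals."""
          rng = random.Random(seed)
          individuals = [tip for tip, n in counts.items() for _ in range(n)]
          values = []
          for _ in range(draws):
              subsample = set(rng.sample(individuals, k))
              values.append(pd_of_sample(branches, subsample))
          mean = sum(values) / draws
          var = sum((v - mean) ** 2 for v in values) / (draws - 1)
          return mean, var

      # Tiny hypothetical 3-tip tree: root-A (1.0), root-BC (0.5), BC-B (0.7), BC-C (0.4).
      branches = [(1.0, {"A"}), (0.5, {"B", "C"}), (0.7, {"B"}), (0.4, {"C"})]
      counts = {"A": 10, "B": 3, "C": 1}
      print(rarefied_pd(branches, counts, k=5))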

  3. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.

  4. A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity

    PubMed Central

    Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations for our method and others we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant we demonstrate how linkage disequilibrium (LD) can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D’ and relatively low r2 values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL) and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837

  5. DRAWING R-1001-31, COMPANY OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DRAWING R-1001-31, COMPANY OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, LAS LOMAS AND BUENA VISTA DRIVES. Ink on linen, signed by H.B. Nurse. Date has been erased, but probably June 15, 1933. Also marked "PWC 104288." - Hamilton Field, East of Nave Drive, Novato, Marin County, CA

  6. DRAWING R-1001-32, FIELD OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DRAWING R-1001-32, FIELD OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, SOUTH CIRCLE, CASA GRANDE REAL, AND SEQUOIA DRIVES. Ink on linen, signed by H.B. Nurse. Date has been erased, but probably June 15, 1933. Also marked "PWC 104289." - Hamilton Field, East of Nave Drive, Novato, Marin County, CA

  7. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
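
    A sketch of the hybrid estimate described above: the expected true error variance given an ensemble sample variance is approximated as a weighted average of the climatological error variance and the ensemble variance. The weight and variance values below are illustrative, not derived from any archive of (observation-minus-forecast, ensemble-variance) pairs.

      def hybrid_error_variance(ensemble_var, clim_var, w_ens):
          # Weighted average of the flow-dependent ensemble variance and the
          # static climatological error variance; w_ens is an illustrative weight.
          return w_ens * ensemble_var + (1.0 - w_ens) * clim_var

      print(hybrid_error_variance(ensemble_var=1.8, clim_var=1.2, w_ens=0.6))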

  8. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle to financial portfolio selection strategy is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, S&P 500 index for the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in literature and control the downside risk of the portfolio returns.

  9. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  10. Variance-Stable R-Estimators.

    DTIC Science & Technology

    1984-05-01

    By means of the concept of the change-of-variance function we investigate the stability properties of the asymptotic variance of R-estimators. This allows us to construct the optimal V-robust R-estimator that minimizes the asymptotic variance at the model, under the side condition of a bounded change-of-variance function. Finally, we discuss the connection between this function and an influence function for two-sample rank tests introduced by Eplett (1980). (Author)

  11. Origin and Consequences of the Relationship between Protein Mean and Variance

    PubMed Central

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021
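
    A quick sketch of how the power-law relationship described above can be recovered: regressing log variance on log mean returns the exponent. The data are synthetic, not the yeast proteome measurements.

      import numpy as np

      rng = np.random.default_rng(4)
      mu = 10 ** rng.uniform(1, 4, size=500)                    # synthetic protein mean levels
      var = mu ** 1.69 * 10 ** rng.normal(0, 0.15, size=500)    # scatter around the power law
      slope, intercept = np.polyfit(np.log10(mu), np.log10(var), 1)
      print(f"estimated exponent = {slope:.2f}")                # close to 1.69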

  12. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target.

    PubMed

    Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M

    2014-01-20

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.

  13. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e. f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
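
    A simulation sketch of the comparison described above: the population variance is estimated either from T grab samples or from T composites of n pooled units (each composite being the mean of its n units, so its sample variance is rescaled by n). The distributions below are chosen only to contrast platykurtic and leptokurtic cases.

      import numpy as np

      def estimator_spread(draw, T=50, n=5, trials=4000):
          grab_est, comp_est = [], []
          for _ in range(trials):
              grab_est.append(draw(T).var(ddof=1))               # variance estimate from T grab samples
              composites = draw((T, n)).mean(axis=1)             # T composites of n pooled units
              comp_est.append(n * composites.var(ddof=1))        # rescale to estimate the unit variance
          return np.var(grab_est), np.var(comp_est)              # spread = (in)efficiency of each estimator

      rng = np.random.default_rng(5)
      for name, draw in [("normal", rng.standard_normal),
                         ("uniform (platykurtic)", lambda s: rng.uniform(-1, 1, s)),
                         ("laplace (leptokurtic)", lambda s: rng.laplace(0, 1, s))]:
          g, c = estimator_spread(draw)
          print(f"{name:22s} grab {g:.5f}   composite {c:.5f}")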

  14. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as the risk measure in the mean-variance model. The purpose of this study is to compare the composition and performance of the mean-variance optimal portfolio with those of an equally weighted portfolio. In an equally weighted portfolio, the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Moreover, the mean-variance optimal portfolio performs better because it yields a higher performance ratio than the equally weighted portfolio.

  15. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section... premises of every inspected plant shall be kept in clean and orderly condition. All catchbasins on the...

  16. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section... premises of every inspected plant shall be kept in clean and orderly condition. All catchbasins on the...

  17. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance…
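
    A sketch of the mean-variance histogram construction described above: slide a window of N consecutive samples along the current trace, compute the mean and variance within each window, and accumulate the pairs in a two-dimensional histogram; low-variance regions mark defined current levels. The synthetic trace stands in for real single-channel data.

      import numpy as np

      def mean_variance_histogram(trace, window, mean_bins=80, var_bins=80):
          windows = np.lib.stride_tricks.sliding_window_view(trace, window)
          means = windows.mean(axis=1)
          variances = windows.var(axis=1)
          return np.histogram2d(means, variances, bins=(mean_bins, var_bins))

      # Synthetic two-level channel: closed (0 pA) and open (-2 pA) with Gaussian noise.
      rng = np.random.default_rng(6)
      states = np.repeat(rng.integers(0, 2, size=200), rng.integers(20, 200, size=200))
      trace = -2.0 * states + rng.normal(0, 0.25, size=states.size)
      hist, mean_edges, var_edges = mean_variance_histogram(trace, window=10)
      # Low-variance rows of `hist` correspond to dwells at defined current levels.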

  18. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed Central

    Patlak, J B

    1993-01-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance…

  19. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means, assuming the variances are known. Several methods were applied to simulated datasets in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships and was competitive with them in the others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between them. The source code written in R is available from the authors on request.

  20. A Nonparametric Mean-Variance Smoothing Method to Assess Arabidopsis Cold Stress Transcriptional Regulator CBF2 Overexpression Microarray Data

    PubMed Central

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means, assuming the variances are known. Several methods were applied to simulated datasets in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships and was competitive with them in the others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between them. The source code written in R is available from the authors on request. PMID:21611181
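
    A rough sketch of the general idea (not the authors' R implementation): smooth the gene-wise variance as a function of the gene-wise mean with a nonparametric fit, then shrink each variance toward the fitted curve before forming a test statistic. The 0.5 shrinkage weight and all data below are illustrative assumptions.

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(1)
      n_genes, n_reps = 2000, 4
      # Synthetic expression matrix with a mean-dependent variance.
      true_mean = rng.uniform(4, 12, n_genes)
      data = true_mean[:, None] + rng.normal(0, 0.1 + 0.05 * true_mean[:, None],
                                             size=(n_genes, n_reps))

      gene_mean = data.mean(axis=1)
      gene_var = data.var(axis=1, ddof=1)

      # Nonparametric fit of variance against mean (the smoothing step).
      fitted_var = lowess(gene_var, gene_mean, frac=0.3, return_sorted=False)

      # Shrink each gene's variance toward the fitted curve; the 0.5 weight is an
      # arbitrary illustrative choice, not the paper's estimator.
      shrunk_var = 0.5 * gene_var + 0.5 * np.clip(fitted_var, 1e-8, None)
      t_like = gene_mean / np.sqrt(shrunk_var / n_reps)
      print(np.round(t_like[:5], 2))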

  1. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    NASA Technical Reports Server (NTRS)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  2. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    9 CFR, Animals and Animal Products: § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be identified...

  3. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    9 CFR, Animals and Animal Products: § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be identified...

  4. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    9 CFR, Animals and Animal Products: § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be identified...

  5. On the mean and variance of the writhe of random polygons.

    PubMed

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons of length n behaves as a linear function of n.

  6. On the mean and variance of the writhe of random polygons

    PubMed Central

    Portillo, J.; Diao, Y.; Scharein, R.; Arsuaga, J.; Vazquez, M.

    2013-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an “ideal” conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons of length n behaves as a linear function of n. PMID:25685182

  7. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  8. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as the performances of the mean-variance model and the mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis changes the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
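
    The polynomial goal programming step itself is not reproduced here, but the four portfolio moments and a simplified composite objective standing in for it can be sketched as follows; the synthetic returns, moment weights, and long-only constraint are all illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      returns = 0.02 * rng.standard_t(df=5, size=(500, 4))   # synthetic weekly returns

      def moments(w, r):
          p = r @ w
          mu, sd = p.mean(), p.std(ddof=1)
          z = (p - mu) / sd
          return mu, sd**2, (z**3).mean(), (z**4).mean()     # mean, variance, skewness, kurtosis

      # Simplified composite objective standing in for polynomial goal programming:
      # reward mean and skewness, penalize variance and kurtosis.
      def objective(w, r, lam=(1.0, 1.0, 0.5, 0.1)):
          mu, var, skew, kurt = moments(w, r)
          return -(lam[0] * mu - lam[1] * var + lam[2] * skew - lam[3] * kurt)

      n = returns.shape[1]
      constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
      result = minimize(objective, np.full(n, 1.0 / n), args=(returns,),
                        bounds=[(0.0, 1.0)] * n, constraints=constraints)
      print("weights:", np.round(result.x, 3))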

  9. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
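
    The two competing valuation rules compared above are easy to state in code. A minimal sketch for a single discrete gamble follows; the CARA utility, the risk-aversion coefficient, and the variance penalty are arbitrary illustrative choices.

      import numpy as np

      payoffs = np.array([10.0, 0.0, -5.0])   # a simple three-outcome gamble
      probs = np.array([0.3, 0.5, 0.2])

      def expected_utility(payoffs, probs, risk_aversion=0.1):
          # Expected utility: sum of p_i * u(x_i) with an exponential (CARA) utility.
          u = 1.0 - np.exp(-risk_aversion * payoffs)
          return float(np.sum(probs * u))

      def mean_variance_value(payoffs, probs, risk_weight=0.05):
          # Mean-variance value: expected reward penalized by reward variance.
          mean = float(np.sum(probs * payoffs))
          var = float(np.sum(probs * (payoffs - mean) ** 2))
          return mean - risk_weight * var

      print("expected-utility value:", round(expected_utility(payoffs, probs), 3))
      print("mean-variance value:   ", round(mean_variance_value(payoffs, probs), 3))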

  10. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, reflecting the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. Then, the effects of the value-at-risk constraint on the efficient frontier are illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.

  11. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, they have several drawbacks, such as a slow training rate, a propensity to become trapped in local minima, and a poor ability to perform a global search. In order to improve the overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with a double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of the constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve the stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The results demonstrate that intelligent ensemble T-S FNNs based on RCDPSO_DM achieve superior performance, in terms of stability, efficiency, precision and generalizability, over a PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. It therefore makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. One hundred benchmark images from a public image database, the CVG-UGR-Database, are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also well preserve the brightness and details of the original image. PMID:29403529
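
    A much-simplified sketch of the segmentation-plus-equalization idea follows (this is not the MVSIHE algorithm itself): the grey-level range is split at mean - std, mean, and mean + std, each segment's histogram is equalized within its own range, and the result is blended with the input. The bin count and blend weight are illustrative assumptions.

      import numpy as np

      def mvsihe_like(image, blend=0.7):
          # Split the grey-level range at mean - std, mean, mean + std, equalize each
          # segment's histogram within its own range, then blend with the input.
          img = image.astype(np.float64)
          m, s = img.mean(), img.std()
          edges = [img.min(), m - s, m, m + s, img.max() + 1e-9]
          out = np.zeros_like(img)
          for lo, hi in zip(edges[:-1], edges[1:]):
              mask = (img >= lo) & (img < hi)
              if not mask.any():
                  continue
              vals = img[mask]
              hist, bin_edges = np.histogram(vals, bins=64)
              cdf = np.cumsum(hist).astype(np.float64)
              cdf /= cdf[-1]
              out[mask] = lo + (hi - lo) * np.interp(vals, bin_edges[:-1], cdf)
          return blend * out + (1.0 - blend) * img

      rng = np.random.default_rng(3)
      test = rng.normal(120, 20, size=(64, 64)).clip(0, 255)
      enhanced = mvsihe_like(test)
      print(round(float(enhanced.min()), 1), round(float(enhanced.max()), 1))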

  13. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
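
    The two baseline approaches named above (complete-case analysis and sample-size-weighted mean imputation of the missing variances) are easy to sketch; the study summaries below are invented purely for illustration, and the proposed gamma meta-regression multiple imputation is not reproduced here.

      import numpy as np

      # Per-study summaries: mean difference, its sample variance (np.nan when the
      # study did not report one), and total sample size.  Values are invented.
      effect = np.array([0.40, 0.25, 0.55, 0.10, 0.35])
      var = np.array([0.04, np.nan, 0.09, np.nan, 0.05])
      n = np.array([50, 80, 30, 120, 60])

      # Complete-case analysis: drop studies without a reported variance.
      observed = ~np.isnan(var)
      w_cc = 1.0 / var[observed]
      print("complete case estimate:", round(float(np.sum(w_cc * effect[observed]) / np.sum(w_cc)), 3))

      # Mean imputation: replace missing variances by the sample-size-weighted mean
      # of the observed variances, then use inverse-variance weights as usual.
      imputed = var.copy()
      imputed[~observed] = np.average(var[observed], weights=n[observed])
      w = 1.0 / imputed
      print("mean imputation estimate:", round(float(np.sum(w * effect) / np.sum(w)), 3))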

  14. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    PubMed

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in the Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.

  15. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    PubMed Central

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-01-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in the Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458

  16. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    PubMed

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
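
    The assumed variance-to-mean power relationship can be checked on simulated spike trains with a simple log-log fit. The sketch below uses gamma-distributed interspike intervals whose variance is forced to follow var = scale * mean**exponent; the parameter values are illustrative, and this is not the paper's likelihood-based estimator.

      import numpy as np

      rng = np.random.default_rng(4)
      scale, exponent = 0.8, 1.5              # assumed parameters of var = scale * mean**exponent

      # Simulate interspike intervals for several firing-rate conditions using
      # gamma-distributed intervals whose variance follows the power law.
      means = np.linspace(0.02, 0.5, 12)      # mean ISI (s) for each condition
      est_mean, est_var = [], []
      for m in means:
          v = scale * m**exponent
          shape, theta = m**2 / v, v / m      # gamma(shape, scale) with mean m, variance v
          isi = rng.gamma(shape, theta, size=5000)
          est_mean.append(isi.mean())
          est_var.append(isi.var(ddof=1))

      # Fit log(var) = log(scale) + exponent * log(mean) by least squares.
      slope, intercept = np.polyfit(np.log(est_mean), np.log(est_var), 1)
      print("estimated exponent:", round(float(slope), 3))
      print("estimated scale:   ", round(float(np.exp(intercept)), 3))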

  17. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with
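
    A simplified sketch of the selection step described above: transform each population with asinh(x / cofactor) and pick the cofactor whose transformed populations have the most homogeneous variances according to Bartlett's statistic. The synthetic populations and the cofactor grid are illustrative assumptions; flowVS itself operates on flow cytometry channels and identified cell populations.

      import numpy as np
      from scipy.stats import bartlett

      rng = np.random.default_rng(5)
      # Synthetic one-channel fluorescence values for three cell populations whose
      # spread grows with their mean intensity.
      pops = [rng.normal(mu, 5 + 0.3 * mu, size=800).clip(min=0) for mu in (20, 80, 200)]

      def transformed(pops, cofactor):
          return [np.arcsinh(p / cofactor) for p in pops]

      # Grid-search the asinh cofactor that makes the population variances most
      # homogeneous according to Bartlett's statistic.
      cofactors = np.logspace(0, 3, 60)
      stats = []
      for c in cofactors:
          statistic, _pvalue = bartlett(*transformed(pops, c))
          stats.append(statistic)
      print("selected cofactor:", round(float(cofactors[int(np.argmin(stats))]), 2))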

  18. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items.

    PubMed

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.

  19. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items

    PubMed Central

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318

  20. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics.

    PubMed

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  1. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  2. A simple and exploratory way to determine the mean-variance relationship in generalized linear models.

    PubMed

    Tsou, Tsung-Shan

    2007-03-30

    This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.

  3. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing the equivalent transformations.

  4. Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.

    PubMed

    Scheuss, V; Neher, E

    2001-10-01

    Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli.
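
    For the simplest binomial release model without quantal variability, the variance-mean relation is var = q*mean - mean**2/N, so q and N can be recovered by fitting a parabola through the ensemble (mean, variance) pairs. The sketch below does exactly that on simulated data; it ignores the quantal variability, heterogeneous release probabilities, covariance analysis, and saturation effects treated in the paper.

      import numpy as np

      rng = np.random.default_rng(6)
      q_true, n_sites = 20.0, 8               # quantal size (pA) and number of release sites

      # Collect the ensemble mean and variance of response amplitudes at several
      # release probabilities (no quantal variability in this toy simulation).
      means, variances = [], []
      for p in (0.1, 0.2, 0.35, 0.5, 0.7, 0.9):
          amplitude = q_true * rng.binomial(n_sites, p, size=2000)
          means.append(amplitude.mean())
          variances.append(amplitude.var(ddof=1))

      # Binomial model: var = q * mean - mean**2 / N.  Fit var = a*mean + b*mean**2.
      means = np.array(means)
      design = np.column_stack([means, means**2])
      a, b = np.linalg.lstsq(design, np.array(variances), rcond=None)[0]
      print("estimated quantal size q:", round(float(a), 2))
      print("estimated number of sites N:", round(float(-1.0 / b), 2))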

  5. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures

  6. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used
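
    The budget-constrained allocation described above can be explored numerically with a brute-force search. The sketch below uses a simplified two-stage version (subjects and repeated measurements only, rather than the paper's three stages); the variance components, unit costs, power-function exponents and budget are illustrative assumptions.

      import numpy as np

      # Variance components and cost model (all values are illustrative assumptions).
      var_between, var_within = 1.0, 4.0
      cost_subject, cost_measurement = 50.0, 10.0
      exp_subject, exp_measurement = 1.0, 0.8        # power-function cost exponents
      budget = 2000.0

      best = None
      for k in range(1, 201):                        # number of subjects
          for n in range(1, 51):                     # measurements per subject
              cost = cost_subject * k**exp_subject + cost_measurement * (k * n)**exp_measurement
              if cost > budget:
                  continue
              var_of_mean = var_between / k + var_within / (k * n)
              if best is None or var_of_mean < best[0]:
                  best = (var_of_mean, k, n, cost)

      var_of_mean, k, n, cost = best
      print(f"subjects={k}, repeats={n}, cost={cost:.0f}, Var(mean)={var_of_mean:.4f}")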

  7. Read-noise characterization of focal plane array detectors via mean-variance analysis.

    PubMed

    Sperline, R P; Knight, A K; Gresham, C A; Koppenaal, D W; Hieftje, G M; Denton, M B

    2005-11-01

    Mean-variance analysis is described as a method for characterization of the read-noise and gain of focal plane array (FPA) detectors, including charge-coupled devices (CCDs), charge-injection devices (CIDs), and complementary metal-oxide-semiconductor (CMOS) multiplexers (infrared arrays). Practical FPA detector characterization is outlined. The nondestructive readout capability available in some CIDs and FPA devices is discussed as a means for signal-to-noise ratio improvement. Derivations of the equations are fully presented to unify understanding of this method by the spectroscopic community.
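
    Mean-variance (photon-transfer) characterization is commonly summarized by the relation var ~ mean / gain + read_noise^2 in digital numbers, so a straight-line fit of variance against mean yields the gain from the slope and the read noise from the intercept. The sketch below demonstrates this on simulated flat-field data; the gain and read-noise values, and the reduction to a single pixel statistic, are illustrative assumptions rather than details from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      gain_e_per_dn, read_noise_dn = 2.5, 3.0        # assumed "true" values for the simulation

      # Simulate flat-field data at several illumination levels and record the mean
      # signal and its variance (in digital numbers, DN) at each level.
      means, variances = [], []
      for electrons in np.linspace(200, 20000, 15):
          signal_dn = rng.poisson(electrons, size=20000) / gain_e_per_dn
          frame = signal_dn + rng.normal(0.0, read_noise_dn, size=signal_dn.size)
          means.append(frame.mean())
          variances.append(frame.var(ddof=1))

      # Photon-transfer relation: var ~ mean / gain + read_noise**2, so the slope of
      # variance against mean gives 1/gain and the intercept the read-noise variance.
      slope, intercept = np.polyfit(means, variances, 1)
      print("estimated gain (e-/DN):", round(float(1.0 / slope), 2))
      print("estimated read noise (DN):", round(float(np.sqrt(intercept)), 2))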

  8. The dynamics of integrate-and-fire: mean versus variance modulations and dependence on baseline parameters.

    PubMed

    Pressley, Joanna; Troyer, Todd W

    2011-05-01

    The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
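
    A minimal Euler-Maruyama simulation of a leaky integrate-and-fire neuron driven by a sinusoidally modulated mean current with fixed input variance is sketched below; all parameter values are illustrative, and extracting the frequency-response gain discussed above would additionally require cycle-averaging the resulting spike times.

      import numpy as np

      rng = np.random.default_rng(8)
      dt, t_max = 1e-4, 5.0                         # time step and duration (s)
      tau_m, v_thresh, v_reset = 0.02, 1.0, 0.0     # membrane time constant (s), threshold, reset
      mod_freq, noise_sd = 5.0, 0.1                 # modulation frequency (Hz) and input noise

      steps = int(t_max / dt)
      t = np.arange(steps) * dt
      mean_current = 1.1 + 0.2 * np.sin(2 * np.pi * mod_freq * t)   # sinusoidal mean drive
      noise = rng.normal(size=steps)

      v, spike_times = 0.0, []
      for i in range(steps):
          drive = mean_current[i] + noise_sd * noise[i] / np.sqrt(dt)
          v += dt * (-v + drive) / tau_m            # Euler-Maruyama step of the LIF equation
          if v >= v_thresh:
              spike_times.append(t[i])
              v = v_reset

      print("spikes:", len(spike_times), " mean rate (Hz):", round(len(spike_times) / t_max, 1))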

  9. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, that is, defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization is defined as the objective function and used as the criterion for uncertainty assessment, so the problem can be solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to the variations of grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.

  10. Spatiotemporal characterization of Ensemble Prediction Systems - the Mean-Variance of Logarithms (MVL) diagram

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.

    2008-02-01

    We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
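
    The MVL diagram reduces each lead time to two numbers: the mean and the variance, over ensemble members and grid points, of the logarithm of the member-minus-control differences. A toy sketch with exponentially growing, multiplicatively noisy differences (not a real NWP ensemble) follows; the growth rate and noise level are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)
      n_members, n_grid, n_steps = 20, 400, 30
      growth_rate = 0.3

      # Toy ensemble: member-minus-control differences grow exponentially with
      # multiplicative noise, which yields roughly lognormal fluctuations.
      diff = rng.normal(0.0, 1e-4, size=(n_members, n_grid))
      for step in range(1, n_steps + 1):
          diff = diff * np.exp(growth_rate + 0.3 * rng.normal(size=diff.shape))
          log_fluct = np.log(np.abs(diff) + 1e-300)
          # One MVL-diagram point per lead time: (variance, mean) of the log fluctuations.
          if step % 10 == 0:
              print(f"step={step:2d}  mean={log_fluct.mean():7.2f}  variance={log_fluct.var(ddof=1):5.2f}")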

  11. Gender differences in variance and means on the Naglieri Non-verbal Ability Test: data from the Philippines.

    PubMed

    Vista, Alvin; Care, Esther

    2011-06-01

    Research on gender differences in intelligence has focused mostly on samples from Western countries, and empirical evidence on gender differences from Southeast Asia is relatively sparse. This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public school students from the Philippines. More than 2,700 sixth graders from public schools across the country were tested with the Naglieri Non-verbal Ability Test (NNAT). Variance ratios (VRs) and log-transformed VRs were computed. Proportion ratios for each of the ability levels were also calculated and a chi-square goodness-of-fit test was performed. An analysis of variance was performed to determine the overall gender difference in mean scores as well as the differences within each of three age subgroups. Our data show a non-existent or trivial gender difference in mean scores. However, the tails of the distributions show differences between the males and females, with greater variability among males in the upper half of the distribution and greater variability among females in the lower half of the distribution. Descriptions of the results and their implications are discussed. Results on mean score differences support the hypothesis that there are no significant gender differences in cognitive ability. The unusual results regarding differences in variance and the male-female proportion in the tails require more complex investigations. ©2010 The British Psychological Society.

  12. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, and related areas. This paper studies applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
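
    The Lasserre SDP machinery is not reproduced here, but the underlying model, a mean-variance objective with proportional transaction costs on the rebalance, can be set up and handed to a generic nonlinear solver as a rough sanity check. The synthetic returns, risk aversion, cost rate and current holdings below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(10)
      returns = rng.normal(0.001, 0.02, size=(250, 5))    # synthetic daily returns, 5 assets
      mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
      w_current = np.full(5, 0.2)                          # current holdings
      risk_aversion, cost_rate = 5.0, 0.002                # illustrative parameters

      # Mean-variance objective with proportional transaction costs on the rebalance.
      def objective(w):
          return (risk_aversion * w @ cov @ w
                  - mu @ w
                  + cost_rate * np.abs(w - w_current).sum())

      constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
      result = minimize(objective, w_current, bounds=[(0.0, 1.0)] * 5, constraints=constraints)
      print("rebalanced weights:", np.round(result.x, 3))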

  13. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
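
    The generalization alluded to is presumably the following identity, which needs only uncorrelatedness and the common mean (in LaTeX notation):

      \[
        \mathbb{E}\!\left[S^2\right]
        = \frac{1}{n-1}\,\mathbb{E}\!\left[\sum_{i=1}^{n}(X_i-\bar X)^2\right]
        = \frac{1}{n-1}\left(\sum_{i=1}^{n}\sigma_i^2 - n\,\operatorname{Var}(\bar X)\right)
        = \frac{1}{n}\sum_{i=1}^{n}\sigma_i^2 ,
      \]
      where the $X_i$ are uncorrelated with $\mathbb{E}[X_i]=\mu$, $\operatorname{Var}(X_i)=\sigma_i^2$,
      and $\operatorname{Var}(\bar X)=\tfrac{1}{n^2}\sum_i \sigma_i^2$; with equal variances this reduces
      to the familiar $\mathbb{E}[S^2]=\sigma^2$.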

  14. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    PubMed

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.

  16. Variance adaptation in navigational decision making

    NASA Astrophysics Data System (ADS)

    Gershow, Marc; Gepner, Ruben; Wolk, Jason; Wadekar, Digvijay

    Drosophila larvae navigate their environments using a biased random walk strategy. A key component of this strategy is the decision to initiate a turn (change direction) in response to declining conditions. We modeled this decision as the output of a Linear-Nonlinear-Poisson cascade and used reverse correlation with visual and fictive olfactory stimuli to find the parameters of this model. Because the larva responds to changes in stimulus intensity, we used stimuli with uncorrelated normally distributed intensity derivatives, i.e. Brownian processes, and took the stimulus derivative as the input to our LNP cascade. In this way, we were able to present stimuli with 0 mean and controlled variance. We found that the nonlinear rate function depended on the variance in the stimulus input, allowing larvae to respond more strongly to small changes in low-noise compared to high-noise environments. We measured the rate at which the larva adapted its behavior following changes in stimulus variance, and found that larvae adapted more quickly to increases in variance than to decreases, consistent with the behavior of an optimal Bayes estimator. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  17. Contrast discrimination: Second responses reveal the relationship between the mean and variance of visual signals

    PubMed Central

    Solomon, Joshua A.

    2007-01-01

    To explain the relationship between first- and second-response accuracies in a detection experiment, Swets, Tanner, and Birdsall [Swets, J., Tanner, W. P., Jr., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301–340] proposed that the variance of visual signals increased with their means. However, both a low threshold and intrinsic uncertainty produce similar relationships. I measured the relationship between first- and second-response accuracies for suprathreshold contrast discrimination, which is thought to be unaffected by sensory thresholds and intrinsic uncertainty. The results are consistent with a slowly increasing variance. PMID:17961625

  18. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
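
    For the first (one-way random effects) model described above, the precision argument can be made explicit. With k workers, n repeats per worker, between-worker variance sigma_B^2 and within-worker variance sigma_W^2, the variance of the estimated group mean is, in LaTeX notation:

      \[
        \operatorname{Var}(\hat\mu) \;=\; \frac{\sigma_B^2}{k} \;+\; \frac{\sigma_W^2}{k\,n},
      \]
      so for a fixed total number of measurements $k\,n$ the first term is driven down only by adding
      workers, which is consistent with the recommendation above to maximize $k$ while keeping $n$ at
      two or at most three.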

  19. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    PubMed

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections that should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity and the problem of deciding which brain connections that are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
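
    The two thresholding strategies contrasted above act on the same object, a sliding-window correlation time-series, summarized either by its magnitude or by its variance. A minimal sketch on synthetic signals follows; the window length, signal model and noise level are illustrative assumptions, not the study's preprocessing.

      import numpy as np

      rng = np.random.default_rng(11)
      n_timepoints, window = 600, 60

      # Two synthetic ROI time-series sharing a slowly drifting component, so their
      # coupling fluctuates over time.
      shared = np.convolve(rng.normal(size=n_timepoints), np.ones(30) / 30, mode="same")
      roi_a = shared + 0.8 * rng.normal(size=n_timepoints)
      roi_b = shared + 0.8 * rng.normal(size=n_timepoints)

      # Sliding-window correlation time-series (the dynamic connectivity estimate).
      connectivity = np.array([np.corrcoef(roi_a[t:t + window], roi_b[t:t + window])[0, 1]
                               for t in range(n_timepoints - window)])

      # Magnitude-based versus variance-based summaries of the same time-series.
      print("magnitude summary, mean of r:  ", round(float(connectivity.mean()), 3))
      print("variability summary, var of r: ", round(float(connectivity.var(ddof=1)), 4))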

  20. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.

  1. The neutron-gamma Feynman variance to mean approach: Gamma detection and total neutron-gamma detection (theory and practice)

    NASA Astrophysics Data System (ADS)

    Chernikova, Dina; Axell, Kåre; Avdic, Senada; Pázsit, Imre; Nordlund, Anders; Allard, Stefan

    2015-05-01

    Two versions of the neutron-gamma variance to mean (Feynman-alpha method or Feynman-Y function) formula, for either gamma detection only or total neutron-gamma detection, respectively, are derived and compared in this paper. The new formulas are of particular importance for detectors of gamma photons only, or for detectors sensitive to both neutron and gamma radiation. If applied to a plastic or liquid scintillation detector, the total neutron-gamma detection Feynman-Y expression corresponds to a situation where no discrimination is made between neutrons and gamma particles. The gamma variance to mean formulas are useful when a detector of only gamma radiation is used or when working with a combined neutron-gamma detector at high count rates. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions and corresponding intensities for neutrons and gammas, but with the inclusion of prompt reactions only. A one energy group approximation is considered. The comparison of the two different theories is made by using reaction intensities obtained in MCNPX simulations with a simplified geometry for two scintillation detectors and a 252Cf source. In addition, the variance to mean ratios (neutron, gamma and total neutron-gamma) are evaluated experimentally for a weak 252Cf neutron-gamma source, a 137Cs random gamma source and a 22Na correlated gamma source. Because the focus is on the possibility of using neutron-gamma variance to mean theories for both reactor and safeguards applications, we limited the present study to the general analytical expressions for Feynman-alpha formulas.
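
    The basic quantity behind all of these formulas is the Feynman variance-to-mean ratio of counts in a gate of width T, Y(T) = var(counts)/mean(counts) - 1, which is zero for a Poisson source and positive for a correlated one. The sketch below computes Y(T) from a synthetic detection-time list; the burst model and rates are purely illustrative and are not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(12)
      t_max = 200.0
      # Synthetic detection-time list (s): a Poisson background plus short correlated
      # bursts, so the variance-to-mean ratio of gate counts exceeds one.
      singles = rng.uniform(0.0, t_max, size=20000)
      burst_starts = rng.uniform(0.0, t_max, size=800)
      bursts = np.concatenate([s + rng.exponential(1e-4, size=rng.poisson(4))
                               for s in burst_starts])
      times = np.sort(np.concatenate([singles, bursts]))

      def feynman_y(times, gate):
          # Feynman variance-to-mean: Y = var(counts per gate) / mean(counts per gate) - 1.
          counts, _ = np.histogram(times, bins=np.arange(0.0, t_max, gate))
          return counts.var(ddof=1) / counts.mean() - 1.0

      for gate in (1e-4, 1e-3, 1e-2, 1e-1):
          print(f"gate width {gate:7.4f} s   Y = {feynman_y(times, gate):6.3f}")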

  2. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  3. The variance of length of stay and the optimal DRG outlier payments.

    PubMed

    Felder, Stefan

    2009-09-01

    Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates, for normally distributed truncated LOS, that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  4. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.

  5. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
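    A generic Neyman-type allocation gives the flavour of such optimal phase-two selection probabilities; the sketch below (the function name, capping, and budget logic are illustrative assumptions, not the estimator or optimality result of the paper) samples more heavily where Y is more variable given W and cheaper to measure.

      import numpy as np

      def phase_two_probabilities(sd_y_given_w, cost_w, budget):
          """Neyman-type sketch: inclusion probability proportional to sd(Y|W)/sqrt(cost),
          rescaled so the expected phase-two cost roughly matches the budget, capped at 1."""
          raw = sd_y_given_w / np.sqrt(cost_w)
          pi = raw / raw.max()
          pi = np.clip(pi * budget / np.sum(pi * cost_w), 0.0, 1.0)
          return pi

      # toy usage: heteroscedastic Y given W, with two cost strata
      rng = np.random.default_rng(0)
      w = rng.normal(size=500)
      pi = phase_two_probabilities(1.0 + np.abs(w), np.where(w > 0, 2.0, 1.0), budget=300.0)
      phase_two = rng.random(500) < pi   # phase-two inclusion indicators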

  6. Calibrating nadir striped artifacts in a multibeam backscatter image using the equal mean-variance fitting model

    NASA Astrophysics Data System (ADS)

    Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue

    2017-07-01

    Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and moved according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
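    The abstract does not spell out the fitting equations, but the core idea of equalizing near-nadir statistics with an off-nadir reference band can be sketched as follows (a hedged illustration, not the published algorithm):

      import numpy as np

      def match_mean_variance(bs_nadir, bs_reference):
          """Shift and scale near-nadir backscatter samples so their mean and variance
          equal those of an off-nadir reference band (illustrative sketch only)."""
          mu_n, sd_n = bs_nadir.mean(), bs_nadir.std()
          mu_r, sd_r = bs_reference.mean(), bs_reference.std()
          return (bs_nadir - mu_n) * (sd_r / sd_n) + mu_r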

  7. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
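    The paper's objective function and parameter ranges are specific to the MMSE-TRA-NR regression model; the fragment below is only a generic simulated-annealing loop over two recursion parameters in [0, 1], with a stand-in objective, to illustrate the kind of search involved:

      import math, random

      def simulated_annealing(objective, x0, steps=2000, t0=1.0, cooling=0.995, scale=0.05):
          """Minimize `objective` over two parameters constrained to [0, 1]."""
          x, fx = list(x0), objective(x0)
          best, fbest, t = list(x), fx, t0
          for _ in range(steps):
              cand = [min(1.0, max(0.0, xi + random.gauss(0.0, scale))) for xi in x]
              fc = objective(cand)
              if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
                  x, fx = cand, fc
                  if fc < fbest:
                      best, fbest = list(cand), fc
              t *= cooling
          return best, fbest

      # toy usage with a stand-in objective whose optimum is at (0.7, 0.3)
      params, score = simulated_annealing(lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2, [0.5, 0.5])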

  8. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  9. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    PubMed

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
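    As a minimal stand-in for the "ground state" of the mean-variance model (ignoring the return target and the real-world constraints used in the paper, which require a quadratic-programming solver), the global minimum-variance portfolio under the budget constraint has a closed form:

      import numpy as np

      def min_variance_weights(cov):
          """Global minimum-variance weights subject only to sum(w) = 1:
          w = C^{-1} 1 / (1' C^{-1} 1). Short positions are allowed in this sketch."""
          inv_ones = np.linalg.solve(cov, np.ones(cov.shape[0]))
          return inv_ones / inv_ones.sum()

      # toy usage with a 3-asset covariance matrix
      cov = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])
      w = min_variance_weights(cov)   # weights sum to 1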

  10. The Inheritance of Metabolic Flux: Expressions for the within-Sibship Mean and Variance Given the Parental Genotypes

    PubMed Central

    Ward, P. J.

    1990-01-01

    Recent developments have related quantitative trait expression to metabolic flux. The present paper investigates some implications of this for statistical aspects of polygenic inheritance. Expressions are derived for the within-sibship genetic mean and genetic variance of metabolic flux given a pair of parental, diploid, n-locus genotypes. These are exact and hold for arbitrary numbers of gene loci, arbitrary allelic values at each locus, and for arbitrary recombination fractions between adjacent gene loci. The within-sibship genetic variance is seen to be simply a measure of parental heterozygosity plus a measure of the degree of linkage coupling within the parental genotypes. Approximations are given for the within-sibship phenotypic mean and variance of metabolic flux. These results are applied to the problem of attaining adequate statistical power in a test of association between allozymic variation and inter-individual variation in metabolic flux. Simulations indicate that statistical power can be greatly increased by augmenting the data with predictions and observations on progeny statistics in relation to parental allozyme genotypes. Adequate power may thus be attainable at small sample sizes, and when allozymic variation is scored at only a small fraction of the total set of loci whose catalytic products determine the flux. PMID:2379825

  11. VARIANCE ANISOTROPY IN KINETIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.

  12. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  13. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to detect power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, confounds between error variance and GRR, and parameter values that are largely out of the bounds of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  14. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model

    PubMed Central

    Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482

  15. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    PubMed

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches in the literature can be restrictive. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewed response distributions. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline and level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances of plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.

  17. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
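    Since the Fano factor is one of the statistics the paper uses to show that residual trend information leaks into the variance, a minimal estimator is sketched below (the window size and binning are illustrative choices, not the paper's settings); for a stationary Poisson photocount series its expected value is close to 1:

      import numpy as np

      def fano_factor(counts, window):
          """Variance-to-mean ratio of photocounts aggregated into non-overlapping
          windows of `window` bins each."""
          counts = np.asarray(counts, dtype=float)
          n = (len(counts) // window) * window
          sums = counts[:n].reshape(-1, window).sum(axis=1)
          return sums.var(ddof=1) / sums.mean()

      # toy usage: a stationary Poisson series should give a value near 1
      rng = np.random.default_rng(1)
      ff = fano_factor(rng.poisson(5.0, size=10000), window=500)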

  18. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  19. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    ERIC Educational Resources Information Center

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  20. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…

  1. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that, in practice, the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
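    As a small illustration of the D-criterion for trend parameters only (the model matrix below, an intercept plus the two spatial coordinates, is an assumption for the sketch and says nothing about the covariance-parameter criterion discussed above), candidate designs can simply be ranked by the log-determinant of their information matrix:

      import numpy as np

      def d_criterion(design):
          """log det(X'X) for a linear trend model with intercept; larger is better."""
          X = np.column_stack([np.ones(len(design)), design])
          sign, logdet = np.linalg.slogdet(X.T @ X)
          return logdet if sign > 0 else -np.inf

      # toy usage: rank two random 10-point candidate designs in the unit square
      rng = np.random.default_rng(2)
      candidates = [rng.random((10, 2)) for _ in range(2)]
      best = max(candidates, key=d_criterion)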

  2. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

    PubMed Central

    Mulder, Herman A.; Hill, William G.; Knol, Egbert F.

    2015-01-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318

  3. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    ERIC Educational Resources Information Center

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…

  4. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift $\bar{z}$ and redshift bin size $\Delta z$. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, $\bar{z}$, $\Delta z$, and stellar mass $m_*$. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates ($\delta\sigma_v/\sigma_v$) is shown to be better than 20%. We find that for GOODS at $\bar{z}=2$ and with $\Delta z = 0.5$, the relative cosmic variance of galaxies with $m_* > 10^{11}\,M_\odot$ is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of $m_* \sim 10^{10}\,M_\odot$, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at $\bar{z}=2$ for small fields and massive galaxies, while it is less of a concern for larger fields and intermediate-mass galaxies.
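    In symbols, the linear-regime recipe described above amounts to (the notation here is ours, not the paper's):

      $$\sigma_{v,\mathrm{gal}}(\bar{z}, \Delta z, m_*) \;\simeq\; b(\bar{z}, m_*)\,\sigma_{v,\mathrm{DM}}(\bar{z}, \Delta z),$$

    with the galaxy bias $b$ taken from the halo occupation model and $\sigma_{v,\mathrm{DM}}$ the dark matter cosmic variance for the chosen survey geometry.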

  5. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    PubMed

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.

  6. A flexible model for the mean and variance functions, with application to medical cost data.

    PubMed

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  7. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  8. Going beyond the Mean: Using Variances to Enhance Understanding of the Impact of Educational Interventions for Multilevel Models

    ERIC Educational Resources Information Center

    Peralta, Yadira; Moreno, Mario; Harwell, Michael; Guzey, S. Selcen; Moore, Tamara J.

    2018-01-01

    Variance heterogeneity is a common feature of educational data when treatment differences expressed through means are present, and often reflects a treatment by subject interaction with respect to an outcome variable. Identifying variables that account for this interaction can enhance understanding of whom a treatment does and does not benefit in…

  9. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach the investment goal, one has to select a combination of securities from among different portfolios containing a large number of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rates of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.

  10. Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations

    NASA Astrophysics Data System (ADS)

    Gómez-Uribe, Carlos A.; Verghese, George C.

    2007-01-01

    The intrinsic stochastic effects in chemical reactions, and particularly in biochemical networks, may result in behaviors significantly different from those predicted by deterministic mass action kinetics (MAK). Analyzing stochastic effects, however, is often computationally taxing and complex. The authors describe here the derivation and application of what they term the mass fluctuation kinetics (MFK), a set of deterministic equations to track the means, variances, and covariances of the concentrations of the chemical species in the system. These equations are obtained by approximating the dynamics of the first and second moments of the chemical master equation. Apart from needing knowledge of the system volume, the MFK description requires only the same information used to specify the MAK model, and is not significantly harder to write down or apply. When the effects of fluctuations are negligible, the MFK description typically reduces to MAK. The MFK equations are capable of describing the average behavior of the network substantially better than MAK, because they incorporate the effects of fluctuations on the evolution of the means. They also account for the effects of the means on the evolution of the variances and covariances, to produce quite accurate uncertainty bands around the average behavior. The MFK computations, although approximate, are significantly faster than Monte Carlo methods for computing first and second moments in systems of chemical reactions. They may therefore be used, perhaps along with a few Monte Carlo simulations of sample state trajectories, to efficiently provide a detailed picture of the behavior of a chemical system.
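    A one-line example of why the means cannot be propagated independently of the (co)variances (the reaction is chosen here for illustration and is not taken from the paper): for the dimerization $2X \to \emptyset$ with stochastic rate constant $c$, the chemical master equation gives

      $$\frac{d\langle n\rangle}{dt} \;=\; -\,c\,\langle n(n-1)\rangle \;=\; -\,c\left(\langle n\rangle^2 + \sigma_n^2 - \langle n\rangle\right),$$

    so the mean copy number $\langle n\rangle$ is driven by the variance $\sigma_n^2$; MAK simply drops the $\sigma_n^2 - \langle n\rangle$ terms, whereas an MFK-style description closes the system by evolving coupled equations for the means, variances, and covariances.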

  11. The effect of model uncertainty on some optimal routing problems

    NASA Technical Reports Server (NTRS)

    Mohanty, Bibhu; Cassandras, Christos G.

    1991-01-01

    The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.

  12. Why risk is not variance: an expository note.

    PubMed

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
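    A concrete instance of the argument (the numbers and notation are illustrative): let a prospect pay $G>0$ with probability $p$ and $0$ otherwise, so that no loss is possible. Then

      $$\mu = pG, \qquad \sigma^2 = p(1-p)G^2, \qquad \frac{\mu}{\sigma^2} = \frac{1}{(1-p)G} \;\longrightarrow\; 0 \ \text{ as } G \to \infty,$$

    so for sufficiently large $G$ the prospect falls below any continuous increasing mean-variance indifference curve through the origin and is rejected, even though it offers only a chance of gain and no chance of loss.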

  13. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    NASA Astrophysics Data System (ADS)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  14. Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.

    PubMed

    Goldman, N

    1994-09-01

    The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the chi-squared distribution. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital. (ABSTRACT TRUNCATED AT 250 WORDS)

  15. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    USDA-ARS?s Scientific Manuscript database

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  16. Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles

    NASA Astrophysics Data System (ADS)

    Jain, Dhanesh; Lalwani, Mahendra

    2018-05-01

    The performance of a photovoltaic panel is highly affected by changes in atmospheric conditions and the angle of inclination. This article evaluates the optimum tilt angle and orientation angle (surface azimuth angle) for a solar photovoltaic array in order to obtain maximum solar irradiance and to reduce the variance of radiation at different sets or subsets of time periods. Non-linear regression and adaptive neuro-fuzzy inference system (ANFIS) methods are used for predicting the solar radiation. The results of ANFIS are more accurate than those of non-linear regression. These results are further used for evaluating the correlation and for estimating the optimum combination of tilt angle and orientation angle with the help of the General Algebraic Modelling System and a multi-objective genetic algorithm. The hourly average solar irradiation is calculated at different combinations of tilt angle and orientation angle with the help of horizontal-surface radiation data for Jodhpur (Rajasthan, India). The hourly average solar irradiance is calculated for three cases: zero variance, actual variance and double variance, at different time scenarios. It is concluded that monthly collected solar radiation produces better results than bimonthly, seasonally, half-yearly and yearly collected solar radiation. The profit obtained with a monthly varying angle is 4.6% higher with zero variance and 3.8% higher with actual variance than with an annually fixed angle.

  17. Robust optimization of supersonic ORC nozzle guide vanes

    NASA Astrophysics Data System (ADS)

    Bufi, Elio A.; Cinnella, Paola

    2017-03-01

    An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application, and they are taken into account by describing the thermodynamics with the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.

  18. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  19. Inverse Optimization: A New Perspective on the Black-Litterman Model.

    PubMed

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch

    2012-12-11

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.

  20. Inverse Optimization: A New Perspective on the Black-Litterman Model

    PubMed Central

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.

    2014-01-01

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873

  1. The importance of personality and parental styles on optimism in adolescents.

    PubMed

    Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon

    2014-01-01

    Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted having personality factors (in the first step) and maternal and paternal parenting styles, and demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34, p < .01), extraversion (β = .26, p < .01) and agreeableness (β = .16, p < .01), accounted for 34% of the optimism variance and insignificant variance was predicted exclusively by parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.

  2. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as forward-weighted CADIS (FW-CADIS).

  3. Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.

    PubMed

    Vera, J Fernando; Macías, Rodrigo

    2017-06-01

    One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of the points are usually unknown, and the application of criteria originally formulated for two-mode data sets depends on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general for unequal-sized clusters and in low-dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
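    A minimal sketch of the kind of within-block dispersion term such criteria are built from is given below; it uses the standard identity that, for squared Euclidean dissimilarities, the within-cluster sum of squares equals the sum of pairwise squared dissimilarities divided by twice the cluster size (whether this matches the paper's exact decomposition is an assumption):

      import numpy as np

      def within_block_dispersion(D, labels):
          """Within-block dispersion of a one-mode N x N dissimilarity matrix D for a
          given cluster labelling; equals the within-cluster sum of squares when D
          contains Euclidean distances."""
          D = np.asarray(D, dtype=float)
          labels = np.asarray(labels)
          total = 0.0
          for k in np.unique(labels):
              idx = np.flatnonzero(labels == k)
              total += (D[np.ix_(idx, idx)] ** 2).sum() / (2.0 * len(idx))
          return total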

  4. On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.

    PubMed

    Savalei, Victoria

    2018-01-01

    A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.

  5. Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.

    PubMed

    Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L

    2005-12-01

    To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range-matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range-matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by parametric analysis of variance (ANOV), and the results were compared to the non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P ≤ 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. The induced OH sample variance mean (LSI) was 43x the natural OH sample variance mean; for the SSI the ratio was 12x. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance means compared with controls and natural OH.

  6. Trends in Gender Differences in Academic Achievement from 1960 to 1994: An Analysis of Differences in Mean, Variance, and Extreme Scores.

    ERIC Educational Resources Information Center

    Nowell, Amy; Hedges, Larry V.

    1998-01-01

    Uses evidence from seven surveys of the U.S. 12th-grade population and the National Assessment of Educational Progress to show that gender differences in mean and variance in academic achievement are small from 1960 to 1994 but that differences in extreme scores are often substantial. (SLD)

  7. Mean-Reverting Portfolio With Budget Constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.

  8. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides more stable information ratio.

  9. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides more stable information ratio.
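
    For reference, the static Markowitz step that both of these records contrast with stochastic optimization can be written down in a few lines. The sketch below uses the textbook closed-form minimum-variance weights for a target return under a budget constraint; the return vector and covariance matrix are illustrative numbers, not FBMKLCI data.

    ```python
    import numpy as np

    def mean_variance_weights(mu, Sigma, target_return):
        """Classical Markowitz weights: minimize w' Sigma w subject to
        sum(w) = 1 and mu' w = target_return (short selling allowed)."""
        n = len(mu)
        A = np.column_stack([np.ones(n), mu])          # constraint matrix [1, mu]
        b = np.array([1.0, target_return])
        Sinv_A = np.linalg.solve(Sigma, A)             # Sigma^{-1} A
        w = Sinv_A @ np.linalg.solve(A.T @ Sinv_A, b)  # Sigma^{-1} A (A' Sigma^{-1} A)^{-1} b
        return w

    # toy weekly-return inputs (illustrative numbers only)
    mu = np.array([0.002, 0.004, 0.003])
    Sigma = np.array([[0.0004, 0.0001, 0.0000],
                      [0.0001, 0.0009, 0.0002],
                      [0.0000, 0.0002, 0.0006]])
    w = mean_variance_weights(mu, Sigma, target_return=0.003)
    print("weights:", np.round(w, 3), " portfolio variance:", float(w @ Sigma @ w))
    ```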

  10. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
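
    A minimal sketch of the idea, assuming a simple exponential correlation-with-distance model (the report's actual model may differ): the minimum-variance unbiased (GLS) weights for estimating a spatial mean down-weight tightly clustered sample locations.

    ```python
    import numpy as np

    def blue_mean_weights(coords, sigma2=1.0, corr_range=50.0):
        """Best linear unbiased estimator of a spatial mean under an assumed
        exponential correlation model: cov(i, j) = sigma2 * exp(-d_ij / range).
        Returns the sampling weights and the variance of the weighted mean."""
        coords = np.asarray(coords, float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        Sigma = sigma2 * np.exp(-d / corr_range)
        ones = np.ones(len(coords))
        Sinv_1 = np.linalg.solve(Sigma, ones)
        w = Sinv_1 / (ones @ Sinv_1)          # minimum-variance unbiased weights
        var_mean = 1.0 / (ones @ Sinv_1)      # variance of the estimated mean
        return w, var_mean

    # clustered sample locations get down-weighted relative to the isolated one
    coords = [(0, 0), (1, 0), (0, 1), (60, 60)]
    w, v = blue_mean_weights(coords)
    print(np.round(w, 3), round(v, 4))
    ```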

  11. Host nutrition alters the variance in parasite transmission potential

    PubMed Central

    Vale, Pedro F.; Choisy, Marc; Little, Tom J.

    2013-01-01

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts. PMID:23407498

  12. Host nutrition alters the variance in parasite transmission potential.

    PubMed

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
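
    The Poisson-versus-negative-binomial contrast in this abstract is easy to reproduce on simulated counts. The sketch below (illustrative data, not the Daphnia measurements) compares the mean, variance and a method-of-moments negative-binomial size parameter for a Poisson-like and an overdispersed group.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # illustrative parasite-load counts: low food ~ Poisson-like, high food ~ overdispersed
    low_food  = rng.poisson(lam=10, size=200)
    high_food = rng.negative_binomial(n=2, p=2 / (2 + 25), size=200)   # mean 25, heavy right tail

    for label, x in [("low food", low_food), ("high food", high_food)]:
        m, v = x.mean(), x.var(ddof=1)
        # method-of-moments NB "size" parameter; large k ~ Poisson, small k ~ overdispersed
        k = m**2 / (v - m) if v > m else np.inf
        print(f"{label:9s} mean={m:5.1f} variance={v:6.1f} variance/mean={v/m:4.1f} NB size k={k:5.2f}")
    ```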

  13. Genetic basis of between-individual and within-individual variance of docility.

    PubMed

    Martin, J G A; Pirotta, E; Petelle, M B; Blumstein, D T

    2017-04-01

    Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean trait and neglected variation in within-individual variance, or predictability of a trait. In fact, an important assumption of mixed-effects models used to estimate between-individual variance in mean traits is that within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but rarely estimated and accounted for. We used 11 389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both mean trait and predictability into their environmental and genetic components. We found that individuals differed both in their docility and in their predictability of docility with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both mean trait and within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of the variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that the variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  14. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    PubMed

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.

  15. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    NASA Astrophysics Data System (ADS)

    Davendralingam, Navindran

    Conceptual design of aircraft and the airline network (routes) on which aircraft fly are inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage risk of serving future projected demand using a 'yet to be introduced' aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is

  16. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
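
    A hedged sketch of the sampling step described here, assuming a standard noninformative normal-inverse-Wishart setup (the paper's exact parameterization may differ): covariance realizations are drawn from an inverse-Wishart and mean realizations from a multivariate t, both conditioned on a small sample.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # pretend these are the limited experimental observations of two input variables
    data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.0], [1.0, 2.0]], size=15)
    n, p = data.shape
    xbar = data.mean(axis=0)
    S = np.cov(data, rowvar=False)

    # realizations of the population covariance (inverse-Wishart) and mean (multivariate t),
    # reflecting the uncertainty left by only n observations
    cov_draws  = stats.invwishart(df=n - 1, scale=(n - 1) * S).rvs(size=1000, random_state=rng)
    mean_draws = stats.multivariate_t(loc=xbar, shape=S / n, df=n - p).rvs(size=1000, random_state=rng)

    print("spread of plausible population means:", mean_draws.std(axis=0).round(2))
    print("spread of plausible variances:       ", cov_draws[:, [0, 1], [0, 1]].std(axis=0).round(2))
    ```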

  17. Gender Variance and Educational Psychology: Implications for Practice

    ERIC Educational Resources Information Center

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  18. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  19. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
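
    At its core, PCEV reduces to a generalized eigenproblem between the model (explained) and residual covariance matrices of the outcomes. The sketch below is a simplified single-covariate version on simulated data; it omits the exact and asymptotic testing procedures the paper develops.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def pcev_weights(Y, x):
        """Principal component of explained variance (sketch): find the linear
        combination w of the outcomes Y (n x q) that maximizes the variance
        explained by a single covariate x, i.e. maximize
        (w' V_model w) / (w' V_resid w) via a generalized eigenproblem."""
        n, q = Y.shape
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        fitted = X @ beta
        resid = Y - fitted
        V_model = (fitted - fitted.mean(0)).T @ (fitted - fitted.mean(0))
        V_resid = resid.T @ resid
        vals, vecs = eigh(V_model, V_resid)       # generalized symmetric eigenproblem
        return vecs[:, -1], vals[-1]              # eigenvector with the largest ratio

    # toy data: the covariate drives a shared direction of 5 correlated outcomes
    rng = np.random.default_rng(2)
    x = rng.normal(size=200)
    loadings = np.array([0.8, 0.5, 0.0, 0.3, -0.4])
    Y = np.outer(x, loadings) + rng.normal(scale=1.0, size=(200, 5))
    w, ratio = pcev_weights(Y, x)
    print("PCEV weights:", np.round(w / np.linalg.norm(w), 2), " max explained/residual ratio:", round(ratio, 2))
    ```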

  20. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  1. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  2. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
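
    A simplified stand-in for the flowVS idea (not the package's actual algorithm): scan asinh cofactors and keep the one that most nearly equalizes the variances of a set of cell populations, using Bartlett's statistic as the homogeneity criterion. Here the populations are taken as given and the data are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def best_asinh_cofactor(populations, cofactors):
        """Pick the asinh cofactor that most nearly equalizes the variances of
        the given cell populations, using Bartlett's statistic as the criterion
        (a simplified stand-in for the flowVS procedure)."""
        best = None
        for c in cofactors:
            transformed = [np.arcsinh(pop / c) for pop in populations]
            stat, _ = stats.bartlett(*transformed)
            if best is None or stat < best[1]:
                best = (c, stat)
        return best

    # toy fluorescence-like data: variance grows with the mean before transformation
    rng = np.random.default_rng(3)
    pops = [rng.normal(mu, 0.15 * mu, 500) for mu in (200.0, 1000.0, 5000.0)]
    cofactor, stat = best_asinh_cofactor(pops, cofactors=np.logspace(0, 4, 40))
    print(f"chosen cofactor ~ {cofactor:.0f}, Bartlett statistic {stat:.1f}")
    ```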

  3. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily converge to a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.

  4. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.

  5. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  6. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    PubMed

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
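
    The dynamic program behind optimal one-dimensional k-means is short enough to sketch. The version below is the textbook O(kn^2) recurrence on sorted data with prefix sums; the Ckmeans.1d.dp package itself uses a faster algorithm, so treat this only as an illustration of the optimality guarantee.

    ```python
    import numpy as np

    def ckmeans_1d(x, k):
        """Optimal 1-D k-means by dynamic programming (simple O(k n^2) version).
        Returns cluster labels for x and the optimal within-cluster SSE."""
        x = np.asarray(x, float)
        order = np.argsort(x)
        xs = x[order]
        n = len(xs)
        cs = np.concatenate([[0.0], np.cumsum(xs)])
        cs2 = np.concatenate([[0.0], np.cumsum(xs ** 2)])

        def sse(i, j):          # within-segment sum of squares for xs[i..j], inclusive
            s, s2, m = cs[j + 1] - cs[i], cs2[j + 1] - cs2[i], j - i + 1
            return s2 - s * s / m

        D = np.full((k + 1, n), np.inf)
        back = np.zeros((k + 1, n), dtype=int)
        for i in range(n):
            D[1, i] = sse(0, i)
        for q in range(2, k + 1):
            for i in range(q - 1, n):
                for j in range(q - 1, i + 1):          # segment xs[j..i] forms cluster q
                    cand = D[q - 1, j - 1] + sse(j, i)
                    if cand < D[q, i]:
                        D[q, i], back[q, i] = cand, j
        # recover labels by walking the backpointers
        labels_sorted = np.empty(n, dtype=int)
        i, q = n - 1, k
        while q >= 1:
            j = back[q, i] if q > 1 else 0
            labels_sorted[j:i + 1] = q - 1
            i, q = j - 1, q - 1
        labels = np.empty(n, dtype=int)
        labels[order] = labels_sorted
        return labels, D[k, n - 1]

    rng = np.random.default_rng(0)
    x = np.r_[rng.normal(-5, 1, 30), rng.normal(0, 1, 30), rng.normal(6, 1, 30)]
    labels, total_sse = ckmeans_1d(x, 3)
    print("cluster sizes:", np.bincount(labels), " optimal within-cluster SSE:", round(total_sse, 1))
    ```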

  7. Boundary Conditions for Scalar (Co)Variances over Heterogeneous Surfaces

    NASA Astrophysics Data System (ADS)

    Machulskaya, Ekaterina; Mironov, Dmitrii

    2018-05-01

    The problem of boundary conditions for the variances and covariances of scalar quantities (e.g., temperature and humidity) at the underlying surface is considered. If the surface is treated as horizontally homogeneous, Monin-Obukhov similarity suggests the Neumann boundary conditions that set the surface fluxes of scalar variances and covariances to zero. Over heterogeneous surfaces, these boundary conditions are not a viable choice since the spatial variability of various surface and soil characteristics, such as the ground fluxes of heat and moisture and the surface radiation balance, is not accounted for. Boundary conditions are developed that are consistent with the tile approach used to compute scalar (and momentum) fluxes over heterogeneous surfaces. To this end, the third-order transport terms (fluxes of variances) are examined analytically using a triple decomposition of fluctuating velocity and scalars into the grid-box mean, the fluctuation of tile-mean quantity about the grid-box mean, and the sub-tile fluctuation. The effect of the proposed boundary conditions on mixing in an archetypical stably-stratified boundary layer is illustrated with a single-column numerical experiment. The proposed boundary conditions should be applied in atmospheric models that utilize turbulence parametrization schemes with transport equations for scalar variances and covariances including the third-order turbulent transport (diffusion) terms.

  8. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  9. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  10. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers should strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
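
    The extreme-value step that the abstract says adds return-period-dependent variance can be illustrated with a GEV fit and a bootstrap. The sketch below uses synthetic annual maxima (not the study basin) and shows the bootstrap spread of the 2-, 20- and 100-year levels growing with return period.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # illustrative series of annual maximum streamflow (m^3/s)
    annual_maxima = stats.genextreme.rvs(c=-0.1, loc=300, scale=80, size=40, random_state=rng)

    # fit the GEV and convert quantiles to return levels
    shape, loc, scale = stats.genextreme.fit(annual_maxima)
    return_periods = np.array([2, 20, 100])
    levels = stats.genextreme.ppf(1 - 1 / return_periods, shape, loc, scale)

    # bootstrap the fit to see the variance the extreme-value step adds to the projection
    boot = np.array([
        stats.genextreme.ppf(1 - 1 / return_periods,
                             *stats.genextreme.fit(rng.choice(annual_maxima, size=annual_maxima.size)))
        for _ in range(200)
    ])
    for T, lvl, sd in zip(return_periods, levels, boot.std(axis=0)):
        # the bootstrap sd grows with the return period, as the abstract notes
        print(f"{T:4d}-yr event: {lvl:6.0f} m^3/s  (bootstrap sd ~ {sd:4.0f} m^3/s)")
    ```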

  11. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  12. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  13. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…

  14. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  15. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  16. On the problem of data assimilation by means of synchronization

    NASA Astrophysics Data System (ADS)

    Szendro, Ivan G.; RodríGuez, Miguel A.; López, Juan M.

    2009-10-01

    The potential use of synchronization as a method for data assimilation is investigated in a Lorenz96 model. Data representing the reality are obtained from a Lorenz96 model with added noise. We study the assimilation scheme by means of synchronization for different noise intensities. We use a novel plot representation of the synchronization error in a phase diagram consisting of two variables: the amplitude and the width of the error after a suitable logarithmic transformation (the so-called mean-variance of logarithms diagram). Our main result concerns the existence of an "optimal" coupling for which the synchronization is maximal. We finally show how this allows us to quantify the degree of assimilation, providing a criterion for the selection of optimal couplings and validity of models.
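
    A schematic version of assimilation by synchronization on Lorenz96, assuming a simple nudging term applied to every state variable (a stand-in for the coupling studied here, and without the mean-variance-of-logarithms diagnostic): sweeping the coupling strength shows that neither very weak nor very strong coupling gives the smallest synchronization error.

    ```python
    import numpy as np

    def lorenz96(x, F=8.0):
        """Lorenz96 tendencies dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def assimilate_by_nudging(truth0, coupling, steps=4000, dt=0.005, obs_noise=0.5, seed=0):
        """Synchronize a model copy to noisy 'observations' of a truth run by
        adding a nudging term k*(obs - model); returns the time-mean sync error."""
        rng = np.random.default_rng(seed)
        truth = truth0.copy()
        model = truth0 + rng.normal(0, 1.0, truth0.size)      # start from a perturbed state
        err = []
        for _ in range(steps):
            truth = truth + dt * lorenz96(truth)               # Euler step is enough for a sketch
            obs = truth + rng.normal(0, obs_noise, truth.size)
            model = model + dt * (lorenz96(model) + coupling * (obs - model))
            err.append(np.sqrt(np.mean((model - truth) ** 2)))
        return np.mean(err[steps // 2:])                       # discard the transient

    x0 = 8.0 + np.random.default_rng(1).normal(0, 0.5, 40)
    for k in (0.5, 2.0, 10.0, 50.0):
        print(f"coupling {k:5.1f}: mean sync error {assimilate_by_nudging(x0, k):.3f}")
    ```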

  17. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, the most phenotypic variation captured by a sampling method in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  18. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance.

    PubMed

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro

    2017-03-01

    High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1) which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by looking at the melting curve. Five sets of primers covering the translated region of APOA1 exons were designed with expected PCR product size of 100-400 bps. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C at 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C at 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C at 50 PCR cycles. In addition to the suitable procedures of DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that appropriate annealing temperature and PCR cycles were important factors in optimization of HRM technique for variant screening in APOA1 .

  19. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance

    PubMed Central

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; EP Mundhofir, Farmaditya; MH Faradz, Sultana; Hisatome, Ichiro

    2017-01-01

    Background High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1) which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Methods Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by looking at the melting curve. Results Five sets of primers covering the translated region of APOA1 exons were designed with expected PCR product size of 100–400 bps. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C at 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C at 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C at 50 PCR cycles. Conclusion In addition to the suitable procedures of DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that appropriate annealing temperature and PCR cycles were important factors in optimization of HRM technique for variant screening in APOA1. PMID:28331418

  20. Beyond mean allelic effects: A locus at the major color gene MC1R associates also with differing levels of phenotypic and genetic (co)variance for coloration in barn owls.

    PubMed

    San-Jose, Luis M; Ducret, Valérie; Ducrest, Anne-Lyse; Simon, Céline; Roulin, Alexandre

    2017-10-01

    The mean phenotypic effects of a discovered variant help to predict major aspects of the evolution and inheritance of a phenotype. However, differences in the phenotypic variance associated to distinct genotypes are often overlooked despite being suggestive of processes that largely influence phenotypic evolution, such as interactions between the genotypes with the environment or the genetic background. We present empirical evidence for a mutation at the melanocortin-1-receptor gene, a major vertebrate coloration gene, affecting phenotypic variance in the barn owl, Tyto alba. The white MC1R allele, which associates with whiter plumage coloration, also associates with a pronounced phenotypic and additive genetic variance for distinct color traits. Contrarily, the rufous allele, associated with a rufous coloration, relates to a lower phenotypic and additive genetic variance, suggesting that this allele may be epistatic over other color loci. Variance differences between genotypes entailed differences in the strength of phenotypic and genetic associations between color traits, suggesting that differences in variance also alter the level of integration between traits. This study highlights that addressing variance differences of genotypes in wild populations provides interesting new insights into the evolutionary mechanisms and the genetic architecture underlying the phenotype. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  1. Quantifying noise in optical tweezers by Allan variance.

    PubMed

    Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B

    2009-07-20

    Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single molecule assays. Various initiatives have been taken to reduce or eliminate noise but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
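
    The estimator itself is compact. The sketch below computes the non-overlapping Allan variance of a synthetic trace containing white noise plus a slow drift; the minimum of the resulting curve marks the optimal averaging time, which is the kind of information the authors extract (the paper may use the overlapping variant).

    ```python
    import numpy as np

    def allan_variance(x, fs, taus):
        """Non-overlapping Allan variance of a trace x sampled at fs (Hz),
        evaluated at averaging times taus (s): 0.5 * <(mean_{k+1} - mean_k)^2>."""
        x = np.asarray(x, float)
        out = []
        for tau in taus:
            m = max(int(round(tau * fs)), 1)          # samples per averaging block
            nblocks = len(x) // m
            if nblocks < 2:
                out.append(np.nan)
                continue
            means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
            out.append(0.5 * np.mean(np.diff(means) ** 2))
        return np.array(out)

    # synthetic "bead position" trace: white noise plus a slow drift
    fs = 10_000.0
    t = np.arange(int(60 * fs)) / fs
    x = np.random.default_rng(5).normal(0, 1.0, t.size) + 0.02 * t
    taus = np.logspace(-3, 1, 12)
    for tau, a in zip(taus, allan_variance(x, fs, taus)):
        print(f"tau = {tau:8.3f} s   Allan variance = {a:.3e}")
    ```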

  2. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then built on it in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which ensures that the convergence condition of the space-partition approach is well satisfied. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
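
    In the same spirit as the space-partition idea (though not the authors' exact algorithm), first-order indices can be estimated from a single sample by binning each input into equal-probability intervals and applying the law of total variance, as in the sketch below on the Ishigami function.

    ```python
    import numpy as np

    def first_order_indices(X, y, n_bins=20):
        """Estimate first-order (main effect) sensitivity indices from one sample
        by partitioning each input into equal-probability intervals and applying
        the law of total variance: S_i ~ Var(E[Y | bin of X_i]) / Var(Y)."""
        X, y = np.asarray(X, float), np.asarray(y, float)
        total_var = y.var()
        S = []
        for i in range(X.shape[1]):
            edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
            bins = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_bins - 1)
            cond_means = np.array([y[bins == b].mean() for b in range(n_bins)])
            weights = np.array([(bins == b).mean() for b in range(n_bins)])
            S.append(np.sum(weights * (cond_means - y.mean()) ** 2) / total_var)
        return np.array(S)

    # Ishigami test function, whose analytic main effects are known
    rng = np.random.default_rng(6)
    X = rng.uniform(-np.pi, np.pi, size=(100_000, 3))
    y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
    print("estimated S1, S2, S3:", np.round(first_order_indices(X, y), 3),
          " (Ishigami reference ~ 0.314, 0.442, 0.0)")
    ```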

  3. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  4. A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.

    PubMed

    Singh, Narinder; Singh, S B

    2017-01-01

    A modified variant of the gray wolf optimization algorithm, namely the mean gray wolf optimization algorithm, has been developed by modifying the position update (encircling behavior) equations of the gray wolf optimization algorithm. The proposed variant has been tested on 23 standard well-known benchmark test functions (unimodal, multimodal, and fixed-dimension multimodal), and the performance of the modified variant has been compared with particle swarm optimization and gray wolf optimization. The proposed algorithm has also been applied to the classification of 5 data sets to check the feasibility of the modified variant. The results obtained are compared with many other meta-heuristic approaches, i.e., gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the modified variant is able to find the best solutions, with a high level of accuracy in classification and improved local-optima avoidance.

  5. Quantifying cosmic variance

    NASA Astrophysics Data System (ADS)

    Driver, Simon P.; Robotham, Aaron S. G.

    2010-10-01

    We determine an expression for the cosmic variance of any 'normal' galaxy survey based on examination of M* +/- 1 mag galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) data cube. We find that cosmic variance will depend on a number of factors, principally: total survey volume, survey aspect ratio and whether the area surveyed is contiguous or comprises independent sightlines. As a rule of thumb, cosmic variance falls below 10 per cent once a volume of 10^7 h_0.7^-3 Mpc^3 is surveyed for a single contiguous region with a 1:1 aspect ratio. Cosmic variance will be lower for higher aspect ratios and/or non-contiguous surveys. Extrapolating outside our test region we infer that cosmic variance in the entire SDSS DR7 main survey region is ~7 per cent to z < 0.1. The equation obtained from the SDSS DR7 region can be generalized to estimate the cosmic variance for any density measurement determined from normal galaxies (e.g. luminosity densities, stellar mass densities and cosmic star formation rates) within the volume range 10^3-10^7 h_0.7^-3 Mpc^3. We apply our equation to show that two sightlines are required to ensure that cosmic variance is <10 per cent in any ASKAP galaxy survey (divided into Δz ~ 0.1 intervals, i.e. ~1 Gyr intervals for z < 0.5). Likewise 10 MeerKAT sightlines will be required to meet the same conditions. GAMA, VVDS and zCOSMOS all suffer less than 10 per cent cosmic variance (~3-8 per cent) in Δz intervals of 0.1, 0.25 and 0.5, respectively. Finally we show that cosmic variance is potentially at the 50-70 per cent level, or greater, in the Hubble Space Telescope (HST) Ultra Deep Field depending on assumptions as to the evolution of clustering. 100 or 10 independent sightlines will be required to reduce cosmic variance to a manageable level (<10 per cent) for HST ACS or HST WFC3 surveys, respectively (in Δz ~ 1 intervals). Cosmic variance is therefore a significant factor in the z > 6 HST studies currently underway.

  6. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  7. σ-SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
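
    The energy-targeting idea can be demonstrated on a toy matrix Hamiltonian rather than a real mean-field problem: minimizing the variance-like functional <psi|(H - omega)^2|psi> over normalized states lands on the eigenstate closest to the target energy omega instead of collapsing to the ground state. The sketch below is only that toy demonstration, not an SCF implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    A = rng.normal(size=(6, 6))
    H = (A + A.T) / 2                      # toy symmetric "Hamiltonian"
    eigvals = np.linalg.eigvalsh(H)

    def target_state(H, omega):
        """Minimize <psi|(H - omega)^2|psi> over normalized psi; the minimum is
        attained at the eigenstate whose eigenvalue is closest to omega."""
        M = (H - omega * np.eye(len(H))) @ (H - omega * np.eye(len(H)))
        def f(c):
            c = c / np.linalg.norm(c)
            return c @ M @ c
        res = minimize(f, rng.normal(size=len(H)), method="BFGS")
        psi = res.x / np.linalg.norm(res.x)
        return psi, psi @ H @ psi

    omega = eigvals[3] + 0.05              # aim near the 4th state, not the ground state
    psi, energy = target_state(H, omega)
    print("eigenvalues:", np.round(eigvals, 3))
    print(f"targeted omega = {omega:.3f} -> converged energy = {energy:.3f} (no collapse to the ground state)")
    ```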

  8. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (≈3:1 up to ≈20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.

  9. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈(1 , 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
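
    As a hedged illustration of the downward bias described above (not the paper's experiments), the sketch below draws repeated Pareto samples with tail index α = 1.5, for which the true Gini coefficient is 1/(2α - 1), and compares that value with the average nonparametric estimate; the sample size, number of repetitions and seed are arbitrary.

    ```python
    import numpy as np

    # For a Pareto tail index alpha in (1, 2) the true Gini is 1 / (2*alpha - 1),
    # while the standard nonparametric estimator on finite samples tends to come out lower.
    rng = np.random.default_rng(1)
    alpha, n, reps = 1.5, 1000, 200
    true_gini = 1.0 / (2.0 * alpha - 1.0)

    def gini_nonparametric(x):
        x = np.sort(x)
        k = x.size
        return 2.0 * np.sum(np.arange(1, k + 1) * x) / (k * x.sum()) - (k + 1.0) / k

    estimates = []
    for _ in range(reps):
        u = rng.uniform(size=n)
        x = (1.0 - u) ** (-1.0 / alpha)     # Pareto(alpha) samples with scale 1
        estimates.append(gini_nonparametric(x))

    print(true_gini, np.mean(estimates))    # the sample mean typically sits below the true value
    ```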

  10. Using variance structure to quantify responses to perturbation in fish catches

    USGS Publications Warehouse

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  11. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; the parameter λ adjusts the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different sets of experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time when segmenting images corrupted by different types of noise.

  12. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  13. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    PubMed

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (<0.007). Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance

  14. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.

    2016-01-01

    In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.

  15. A new variance stabilizing transformation for gene expression data analysis.

    PubMed

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
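
    The family introduced in the paper is not reproduced here, but its best-known member, the generalized logarithm glog(x) = log(x + sqrt(x^2 + c)), already illustrates the idea of taming a mean-variance relationship. The sketch below is a toy example under the assumption that the standard deviation grows roughly linearly with the mean; the constant c and all data are made up.

    ```python
    import numpy as np

    # Sketch only: glog(x) = log(x + sqrt(x^2 + c)) is one widely used variance-stabilizing
    # transformation; a suitable constant c roughly flattens a variance that grows
    # quadratically with the mean.
    def glog(x, c=1.0):
        return np.log(x + np.sqrt(x ** 2 + c))

    rng = np.random.default_rng(2)
    means = np.array([10.0, 100.0, 1000.0])
    # toy "expression" data with standard deviation proportional to the mean
    raw = np.array([rng.normal(m, 0.2 * m, size=5000) for m in means])

    print(np.var(raw, axis=1))                # variance rises sharply with the mean
    print(np.var(glog(raw, c=25.0), axis=1))  # far more homogeneous after glog
    ```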

  16. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
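
    A minimal sketch of the minimum-variance step only, assuming synthetic heavy-tailed returns: the paper's hybrid Tyler/Ledoit-Wolf estimator and its online tuning of the shrinkage intensity are replaced here by scikit-learn's plain Ledoit-Wolf estimator, after which the global minimum-variance weights are w = C^-1 1 / (1' C^-1 1).

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    # Given T return observations of N assets, shrink the sample covariance and
    # form the global minimum-variance portfolio weights.
    rng = np.random.default_rng(3)
    T, N = 250, 50
    returns = rng.standard_t(df=4, size=(T, N)) * 0.01   # heavy-tailed synthetic returns

    cov = LedoitWolf().fit(returns).covariance_
    ones = np.ones(N)
    w = np.linalg.solve(cov, ones)
    w /= ones @ w                                        # weights sum to one

    print(w.sum(), float(w @ cov @ w))                   # budget check and in-sample variance
    ```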

  17. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
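
    For reference, the textbook expression Var(s^2) = mu4/n - (n - 3)/(n(n - 1)) * sigma^4, evaluated with plug-in sample moments, gives a first estimate of the variance of the variance; the paper goes further and derives less biased estimators via h-statistics for small, non-normal samples. The sketch below uses made-up quantal-like amplitudes.

    ```python
    import numpy as np

    # Plug-in approximation of Var(s^2); it is not the fully unbiased h-statistic
    # estimator discussed in the paper, only a rough point of comparison.
    def var_of_sample_variance(x):
        x = np.asarray(x, dtype=float)
        n = x.size
        m = x.mean()
        m4 = np.mean((x - m) ** 4)           # biased fourth central moment
        s2 = np.var(x, ddof=1)               # unbiased sample variance
        return m4 / n - (n - 3) / (n * (n - 1)) * s2 ** 2

    rng = np.random.default_rng(4)
    amplitudes = rng.binomial(5, 0.3, size=40) * 1.0     # toy quantal-like responses
    print(np.var(amplitudes, ddof=1), var_of_sample_variance(amplitudes))
    ```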

  18. A Model-Free No-arbitrage Price Bound for Variance Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu

    2013-08-01

    We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.

  19. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang

    2017-03-01

    Mechanical transmission systems have been widely adopted in most of industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions.

  20. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.

  1. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  2. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    PubMed

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by running iteratively expectation maximization-REML algorithm by the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10(-3) and 4.17×10(-3) for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  3. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    ERIC Educational Resources Information Center

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…

  4. Accounting for connectivity and spatial correlation in the optimal placement of wildlife habitat

    Treesearch

    John Hof; Curtis H. Flather

    1996-01-01

    This paper investigates optimization approaches to simultaneously modelling habitat fragmentation and spatial correlation between patch populations. The problem is formulated with habitat connectivity affecting population means and variances, with spatial correlations accounted for in covariance calculations. Population with a pre-specified confidence level is then...

  5. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.

  6. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  7. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  8. Boundary layer fluctuations and their effects on mean and variance temperature profiles in turbulent Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Wang, Yin; He, Xiaozhou; Tong, Penger

    2016-11-01

    We report simultaneous measurements of the mean temperature profile θ(z) and temperature variance profile η(z) near the lower conducting plate of a specially designed quasi-two-dimensional cell for turbulent Rayleigh-Bénard convection. The measured θ(z) is found to have a universal scaling form θ(z/δ) with varying thermal boundary layer (BL) thickness δ, and its functional form agrees well with the recently derived BL equation by Shishkina et al. The measured η(z), on the other hand, is found to have a scaling form η(z/δ) only in the near-wall region with z/δ < 2. Based on the experimental findings, we derive a new BL equation for η(z/δ), which is in good agreement with the experimental results. The new BL equations thus provide a common framework for understanding the effect of BL fluctuations. This work was supported by the Research Grants Council of Hong Kong SAR and by the China Thousand Young Talents Program.

  9. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    ERIC Educational Resources Information Center

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…

  10. Ant colony algorithm for clustering in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Subekti, R.; Sari, E. R.; Kusumawati, R.

    2018-03-01

    This research aims to describe portfolio optimization using clustering methods with an ant colony approach. Two stock portfolios of the Indonesian LQ45 index are proposed based on the cluster results obtained from ant colony optimization (ACO). The first portfolio consists of assets with ant colony displacement opportunities beyond the probability limits defined by the researcher, where the weight of each asset is determined by the mean-variance method. The second portfolio consists of two assets, with the assumption that each asset is a cluster formed from ACO. The first portfolio shows better performance than the second portfolio as measured by the Sharpe index.

  11. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
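
    A hedged sketch of the combined strategy discussed above: sliding-window correlations between two synthetic signals are Fisher transformed (arctanh) and then Box-Cox transformed with scipy. Box-Cox requires strictly positive input, so the series is shifted by an offset here, which is a simplification of my own; the window length, signals and seed are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import boxcox

    # Sliding-window dynamic connectivity, Fisher transformed and then Box-Cox transformed.
    rng = np.random.default_rng(5)
    n, win = 600, 60
    x = rng.standard_normal(n)
    y = 0.5 * x + rng.standard_normal(n)

    r = np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                  for i in range(n - win)])
    z = np.arctanh(r)                        # Fisher transformation
    z_bc, lam = boxcox(z - z.min() + 1e-3)   # Box-Cox after shifting to positive values

    print(np.var(z), np.var(z_bc), lam)
    ```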

  12. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.

  13. Defining ulnar variance in the adolescent wrist: measurement technique and interobserver reliability.

    PubMed

    Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P

    2011-02-01

    The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, the younger group (≤ 15 y of age) was significantly less negative for girls (boys, -0.8 mm and girls, -0.3 mm, p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r_p = 0.87-0.97 in boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  14. Optimal decision making on the basis of evidence represented in spike trains.

    PubMed

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.

  15. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
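
    For concreteness, a minimal non-overlapping Allan variance routine is sketched below: AVAR(τ) = <(ybar_{k+1} - ybar_k)^2>/2 for block averages ybar over m samples (τ = m·τ0). It is illustration only and does not touch the spectral-ambiguity question analyzed in the paper; the white-noise test signal and averaging factors are arbitrary.

    ```python
    import numpy as np

    # Non-overlapping Allan variance of a fractional-frequency series y for averaging factor m.
    def allan_variance(y, m):
        n_blocks = y.size // m
        ybar = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)   # block averages
        return 0.5 * np.mean(np.diff(ybar) ** 2)

    rng = np.random.default_rng(6)
    y = rng.standard_normal(100000)          # white frequency noise
    for m in (1, 10, 100):
        print(m, allan_variance(y, m))       # falls roughly as 1/m for white noise
    ```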

  16. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience-oriented, convergence-improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the best trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness.

  17. How does variance in fertility change over the demographic transition?

    PubMed Central

    Hruschka, Daniel J.; Burger, Oskar

    2016-01-01

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082

  18. Technical and biological variance structure in mRNA-Seq data: life in the real world

    PubMed Central

    2012-01-01

    Background mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution in which the variance is equal to the mean. The Negative Binomial distribution which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution as has been reported previously and biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and sample size determination which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017

  19. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
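
    The DerSimonian and Laird estimator mentioned above is simple enough to sketch: with study effects y_i and within-study variances v_i, tau^2_DL = max(0, (Q - (k - 1)) / (sum(w) - sum(w^2)/sum(w))), where w_i = 1/v_i and Q is Cochran's Q. The study effects and variances below are hypothetical.

    ```python
    import numpy as np

    # Default (DerSimonian-Laird) estimator of the between-study variance tau^2.
    def dersimonian_laird(y, v):
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate
        q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
        k = y.size
        denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (q - (k - 1)) / denom)

    y = np.array([0.10, 0.30, 0.35, 0.65, 0.45, 0.15])   # hypothetical study effects
    v = np.array([0.03, 0.04, 0.02, 0.08, 0.05, 0.06])   # hypothetical within-study variances
    print(dersimonian_laird(y, v))
    ```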

  20. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Abstract Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176

  1. Transient responses' optimization by means of set-based multi-objective evolution

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Eisenstadt, Erella; Goldvard, Alex; Salomon, Shaul

    2012-04-01

    In this article, a novel solution to multi-objective problems involving the optimization of transient responses is suggested. It is claimed that the common approach of treating such problems by introducing auxiliary objectives overlooks tradeoffs that should be presented to the decision makers. This means that, if at some time during the responses, one of the responses is optimal, it should not be overlooked. An evolutionary multi-objective algorithm is suggested in order to search for these optimal solutions. For this purpose, state-wise domination is utilized with a new crowding measure for ordered sets being suggested. The approach is tested on both artificial as well as on real life problems in order to explain the methodology and demonstrate its applicability and importance. The results indicate that, from an engineering point of view, the approach possesses several advantages over existing approaches. Moreover, the applications highlight the importance of set-based evolution.

  2. Material saving by means of CWR technology using optimization techniques

    NASA Astrophysics Data System (ADS)

    Pérez, Iñaki; Ambrosio, Cristina

    2017-10-01

    Material saving is currently a must for forging companies, as material costs amount to up to 50% for parts made of steel and up to 90% for other materials like titanium. For long products, cross wedge rolling (CWR) technology can be used to obtain forging preforms with a suitable distribution of the material along their axis. However, defining the correct preform dimensions is not an easy task and can require an intensive trial-and-error campaign. To speed up the preform definition, it is necessary to apply optimization techniques to Finite Element Models (FEM) able to reproduce the material behaviour during rolling. Meta-model Assisted Evolution Strategies (MAES), which combine evolutionary algorithms with Kriging meta-models, are implemented in FORGE® software and allow optimization computation costs to be reduced considerably. The paper shows the application of these optimization techniques to the definition of the right preform for a shaft from an agricultural vehicle. First, the current forging process, based on obtaining the forging preform by means of an open die forging operation, is shown. Then, the CWR preform optimization is developed using the above-mentioned optimization techniques. The objective is to reduce the initial billet weight as much as possible, and the flash weight reduction due to the use of the proposed preform is calculated. Finally, a simulation of the CWR process for the defined preform is carried out to check that the most common CWR failures (necking, spirals, etc.) do not appear in this case.

  3. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis-of-variance problems. These difficulties are discussed and guidelines are given for using the methods.
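
    A small illustration in the spirit of the survey above, assuming scipy is available: homogeneity of variances across heavy-tailed (non-Gaussian) groups is tested with the normal-theory Bartlett test and with the more robust Levene and Fligner-Killeen procedures; the group sizes, scales and seed are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    # Compare a normal-theory test of variance homogeneity with robust alternatives
    # on heavy-tailed samples, where the normal-theory test is known to be unreliable.
    rng = np.random.default_rng(9)
    groups = [rng.standard_t(df=3, size=50) * s for s in (1.0, 1.0, 1.5)]

    print(stats.bartlett(*groups))                    # sensitive to non-normality
    print(stats.levene(*groups, center='median'))     # Brown-Forsythe variant
    print(stats.fligner(*groups))                     # rank-based Fligner-Killeen test
    ```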

  4. Variance Assistance Document: Land Disposal Restrictions Treatability Variances and Determinations of Equivalent Treatment

    EPA Pesticide Factsheets

    This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.

  5. Markowitz portfolio optimization model employing fuzzy measure

    NASA Astrophysics Data System (ADS)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and has become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine the risk and return. The original mean-variance model is applied as a benchmark and compared with fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
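
    As a point of reference for the crisp benchmark model above, the classical mean-variance weights for a target return can be obtained by solving the equality-constrained KKT system; short selling is allowed and the paper's fuzzy-measure extension is not reproduced. The expected returns, covariance matrix and target below are hypothetical.

    ```python
    import numpy as np

    # Classical Markowitz step: minimize w' Sigma w subject to w' mu = target and sum(w) = 1,
    # solved via the linear KKT system.
    def mean_variance_weights(mu, sigma, target):
        n = mu.size
        kkt = np.zeros((n + 2, n + 2))
        kkt[:n, :n] = 2.0 * sigma
        kkt[:n, n], kkt[n, :n] = mu, mu
        kkt[:n, n + 1], kkt[n + 1, :n] = 1.0, 1.0
        rhs = np.concatenate([np.zeros(n), [target, 1.0]])
        return np.linalg.solve(kkt, rhs)[:n]

    mu = np.array([0.08, 0.12, 0.10])                     # hypothetical expected returns
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.06]])                # hypothetical covariance matrix
    w = mean_variance_weights(mu, sigma, target=0.10)
    print(w, w @ mu, w @ sigma @ w)                       # weights, achieved return, variance
    ```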

  6. Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results

    NASA Astrophysics Data System (ADS)

    Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef

    2017-04-01

    The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results with a focus on the geodetic part.

  7. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a portfolio weighting that maximizes the expected return while minimizing the risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution and that the risk of the portfolio is measured using the Value-at-Risk (VaR). The portfolio optimization is then carried out based on the Mean-VaR model using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of the asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight composition vector and the efficient surface of the portfolio are obtained; these can be used as a guide for investors in making investment decisions.

  8. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
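
    A first-order (mean value) moment propagation step of the kind described above can be sketched on an algebraic stand-in for a CFD output: the output mean is approximated by f evaluated at the input means, and the output variance by the squared sensitivities times the input variances. Finite differences stand in for the code's analytic sensitivity derivatives; the response function and all numbers are invented.

    ```python
    import numpy as np

    # First-order moment propagation for independent normal inputs:
    # mean_f ~ f(mu) and var_f ~ sum_i (df/dx_i)^2 * sigma_i^2.
    def first_order_moments(f, mu, sigma, h=1e-6):
        mu = np.asarray(mu, float)
        f0 = f(mu)
        grad = np.array([(f(mu + h * e) - f0) / h for e in np.eye(mu.size)])
        return f0, np.sum((grad * np.asarray(sigma)) ** 2)

    # hypothetical "lift coefficient" response of Mach number and angle of attack
    f = lambda x: 0.2 + 5.0 * x[1] / np.sqrt(abs(1.0 - x[0] ** 2))
    mean_f, var_f = first_order_moments(f, mu=[0.5, 0.03], sigma=[0.01, 0.002])
    print(mean_f, np.sqrt(var_f))
    ```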

  9. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
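
    As a plain (unconstrained, unweighted) point of reference for the CWLS estimator above, TOA positioning can be linearized by differencing squared ranges against a reference anchor, giving an ordinary least-squares problem. The anchor layout, noise level and true position below are invented, and the weighting and quadratic constraint of CWLS are deliberately omitted.

    ```python
    import numpy as np

    # Linearized TOA positioning: subtracting the squared-range equation of the first
    # anchor from the others gives a linear system in the unknown position.
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    true_pos = np.array([30.0, 70.0])
    rng = np.random.default_rng(8)
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, size=4)

    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(estimate)                        # close to the true position
    ```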

  10. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    PubMed

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  11. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model

    PubMed Central

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach. PMID:26225761

  12. Minimum-variance Brownian motion control of an optically trapped probe.

    PubMed

    Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang

    2009-10-20

    This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 microm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
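
    A minimal sketch of minimum-variance feedback on a discrete first-order model of the kind described above, ignoring the actuator dynamics and measurement delay treated in the paper. The model coefficients and noise level are arbitrary illustrative values.

```python
import numpy as np

# Discrete first-order model of the trapped probe (a sketch):
#   x[n+1] = a * x[n] + b * u[n] + w[n],  w ~ N(0, q)  (thermal forcing)
a, b, q = 0.9, 1.0, 1e-4
rng = np.random.default_rng(0)

def position_variance(gain, steps=20000):
    """Simulate proportional feedback u = -gain * x and return Var(x)."""
    x, xs = 0.0, []
    for _ in range(steps):
        u = -gain * x
        x = a * x + b * u + rng.normal(0.0, np.sqrt(q))
        xs.append(x)
    return np.var(xs)

# Without control the stationary variance is q / (1 - a**2); for this simple
# model the minimum-variance gain cancels the dynamics (gain = a / b), leaving
# only the one-step thermal variance q.
print("open loop :", position_variance(0.0), " theory:", q / (1 - a**2))
print("min. var. :", position_variance(a / b), " theory:", q)
```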

  13. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  14. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  15. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  16. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... subpart by filing a written application for such a variance with the local Job Service office serving the... practical difficulty or unnecessary hardship; and (3) Clearly set forth the specific alternative measures... variance. (b) Upon receipt of a written request for a variance under paragraph (a) of this section, the...

  17. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents’ positions in the searching process. In this way, the best trajectories found are retained and the search restarts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with several well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  18. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle.

    PubMed

    Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E

    2013-04-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm directed by the mean best position (QMBA). In QMBA, the position of each bat is updated mainly by the current optimal solution in the early stage of the search, while in the later stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps the algorithm jump out of local optima, keeps the quantum-behaved bats from falling into local optima too easily, and gives the method a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other bat algorithm variants for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  20. Hedged Monte-Carlo: low variance derivative pricing with objective probabilities

    NASA Astrophysics Data System (ADS)

    Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan

    2001-01-01

    We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows one to determine the optimal hedge simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason considerably reduces the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early exercise features.
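
    A one-period sketch of the hedged-regression idea: price and hedge are chosen jointly to minimize the variance of the hedged profit-and-loss, which here reduces to an ordinary least-squares fit. Rates and drift are set to zero, and the lognormal sample paths are simulated rather than historical, so this is only a simplified stand-in for the full backward-induction HMC scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, sigma, T = 100.0, 100.0, 0.2, 0.25       # hypothetical call option data
# simulated (not historical) lognormal terminal prices, zero rate and drift
ST = S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * rng.normal(size=200_000))
payoff = np.maximum(ST - K, 0.0)

# choose price C and hedge phi minimizing E[(payoff - C - phi*(ST - S0))^2],
# i.e. an ordinary least-squares fit with regressors [1, ST - S0]
X = np.column_stack([np.ones_like(ST), ST - S0])
coef, *_ = np.linalg.lstsq(X, payoff, rcond=None)
C, phi = coef
residual_risk = np.std(payoff - X @ coef)

print(f"hedged price C = {C:.3f}, hedge ratio phi = {phi:.3f}")
print(f"residual risk {residual_risk:.3f} vs unhedged {np.std(payoff):.3f}")
```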

  1. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    approaches, but considerably different when compared with the homogenous variance approach. MTC models using a homogenous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  2. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.

  3. Vertical velocity variance in the mixed layer from radar wind profilers

    USGS Publications Warehouse

    Eng, K.; Coulter, R.L.; Brutsaert, W.

    2003-01-01

    Vertical velocity variance data were derived from remotely sensed mixed layer turbulence measurements at the Atmospheric Boundary Layer Experiments (ABLE) facility in Butler County, Kansas. These measurements and associated data were provided by a collection of instruments that included two 915 MHz wind profilers, two radio acoustic sounding systems, and two eddy correlation devices. The data from these devices were available through the Atmospheric Boundary Layer Experiment (ABLE) database operated by Argonne National Laboratory. A signal processing procedure outlined by Angevine et al. was adapted and further built upon to derive the vertical velocity variance, w′², from 915 MHz wind profiler measurements in the mixed layer. The proposed procedure consisted of the application of a height-dependent signal-to-noise ratio (SNR) filter, removal of outliers beyond plus or minus two standard deviations about the mean of the squared spectral width, and removal of the effects of beam broadening and vertical shearing of horizontal winds. The scatter associated with w′² was mainly affected by the choice of SNR filter cutoff values. Several different sets of cutoff values were considered, and the optimal one was selected which reduced the overall scatter in w′² and yet retained a sufficient number of data points to average. A similarity relationship of w′² versus height was established for the mixed layer on the basis of the available data. A strong link between the SNR and growth/decay phases of turbulence was identified. Thus, the mid to late afternoon hours, when strong surface heating occurred, were observed to produce the highest quality signals.

  4. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted sum of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resulting fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
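
    As a stripped-down illustration of fitting a MOST mean profile to multi-level samples with a variance-weighted cost function, the sketch below fits a neutral logarithmic wind profile by weighted least squares; the heights, wind speeds, and per-level standard deviations are invented, and the full method of the abstract handles stability and several variables jointly.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA = 0.4  # von Karman constant

def log_wind(z, u_star, z0):
    """Neutral surface-layer wind profile, the simplest MOST mean profile."""
    return (u_star / KAPPA) * np.log(z / z0)

# hypothetical multi-level wind samples with per-level sample standard deviations
z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
u_obs = np.array([3.1, 3.9, 4.5, 5.2, 6.0])
u_sd = np.array([0.3, 0.25, 0.2, 0.2, 0.35])

# Weighted fit: curve_fit minimizes sum(((u_obs - model) / u_sd)**2), i.e. a
# cost function weighted by the sample error variance at each altitude.
(u_star, z0), pcov = curve_fit(log_wind, z, u_obs, p0=[0.3, 0.1],
                               sigma=u_sd, absolute_sigma=True)
stress = 1.2 * u_star**2   # kinematic momentum flux scaled by an assumed air density of 1.2 kg/m^3
print(f"u* = {u_star:.3f} m/s, z0 = {z0:.4f} m, surface stress ~ {stress:.4f} Pa")
```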

  5. [Hygienic optimization of the use of chemical protective means on railway transport].

    PubMed

    Kaptsov, V A; Pankova, V B; Elizarov, B B; Mezentsev, A P; Komleva, E A

    2004-01-01

    The paper presents data characterizing the working conditions of railway workers. The highest levels of noise and vibration, and of work load and intensity, are found in the energy supply, car, locomotive, and track-maintenance services, where the worst working conditions are noted. These working conditions create a significant occupational risk for railway workers, so the prevention of health problems through the use of chemical protective means is a topical problem. The priority directions of the hygienic rationale for optimizing the choice and use of chemical protective means for workers exposed to occupational hazards are determined.

  6. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even while increasing, the accuracy for the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the number of sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
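
    A compact sketch of the MMSD criterion and a spatial simulated annealing loop of the kind described above (random candidate locations, fixed sample size, geometric cooling); the field size, candidate set, and annealing schedule are arbitrary, and the weighted and kriging-variance criteria are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 100, size=(400, 2))    # candidate sampling locations (m)
gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])   # evaluation grid over the field

def mmsd(points):
    """Mean of the shortest distances from each grid node to its nearest sample."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1).mean()

# spatial simulated annealing: swap one selected point for an unused candidate,
# accept worse schemes with a probability that decays as the temperature cools
n_samples, temp, cooling = 20, 1.0, 0.995
idx = rng.choice(len(candidates), n_samples, replace=False)
current = mmsd(candidates[idx])
for _ in range(3000):
    swap_in = rng.integers(len(candidates))
    if swap_in in idx:
        continue
    new_idx = idx.copy()
    new_idx[rng.integers(n_samples)] = swap_in
    proposal = mmsd(candidates[new_idx])
    if proposal < current or rng.random() < np.exp((current - proposal) / temp):
        idx, current = new_idx, proposal
    temp *= cooling
print("optimized MMSD criterion:", round(current, 3))
```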

  7. σ-SCF: A direct energy-targeting method to mean-field excited states

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
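
    The combination of energy targeting and variance-based minimization can be summarized by one identity, with ω the user-supplied energy guess and Φ a single determinant; this is a paraphrase of the idea in the abstract, not a reproduction of the paper's exact functional:

```latex
\sigma_\omega^2[\Phi] \;=\; \langle \Phi \,|\, (\hat{H}-\omega)^2 \,|\, \Phi \rangle
\;=\; \bigl(E[\Phi]-\omega\bigr)^2 \;+\; \Bigl(\langle \hat{H}^2\rangle_\Phi - \langle \hat{H}\rangle_\Phi^2\Bigr),
\qquad E[\Phi] = \langle \Phi\,|\,\hat{H}\,|\,\Phi\rangle .
```

    Minimizing this quantity pulls the state's energy toward ω (first term) while favoring low energy variance, i.e. near-eigenstates (second term), which is why any state whose energy guess is supplied can be targeted without variational collapse.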

  8. σ-SCF: A direct energy-targeting method to mean-field excited states.

    PubMed

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry-a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states-ground or excited-are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  9. Robust optimization based upon statistical theory.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose

  10. Simultaneous Inference Procedures for Means.

    ERIC Educational Resources Information Center

    Krishnaiah, P. R.

    Some aspects of simultaneous tests for means are reviewed. Specifically, the comparison of univariate or multivariate normal populations based on the values of the means or mean vectors when the variances or covariance matrices are equal is discussed. Tukey's and Dunnett's tests for multiple comparisons of means, Scheffe's method of examining…

  11. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam precisely towards the target user and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-and-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab.
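
    The baseline LCMV weights that the metaheuristics above refine have a standard closed form, sketched here for a hypothetical uniform linear array; the covariance matrix, steering directions, and signal levels are all made up, and no PSO/DM-AIS/GSA refinement is included.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Classic LCMV solution  w = R^{-1} C (C^H R^{-1} C)^{-1} f,
    which minimizes w^H R w subject to C^H w = f."""
    Rinv_C = np.linalg.solve(R, C)
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)

def steering(n, theta_deg, spacing=0.5):
    """Steering vector of an n-element uniform linear array (half-wavelength spacing)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 8
a_sig, a_int = steering(n, 0.0), steering(n, 40.0)
rng = np.random.default_rng(0)
snap = (a_sig[:, None] * rng.normal(size=1000)            # desired signal at 0 deg
        + 5 * a_int[:, None] * rng.normal(size=1000)      # strong interferer at 40 deg
        + 0.1 * (rng.normal(size=(n, 1000)) + 1j * rng.normal(size=(n, 1000))))
R = snap @ snap.conj().T / 1000                           # sample covariance matrix

w = lcmv_weights(R, a_sig[:, None], np.array([1.0]))      # MVDR: unit gain at 0 deg
print("gain toward signal    :", abs(w.conj() @ a_sig))
print("gain toward interferer:", abs(w.conj() @ a_int))
```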

  12. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam precisely towards the target user and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-and-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859

  13. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.

  14. Optimal Solar PV Arrays Integration for Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
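
    A toy version of the rooftop-selection idea: among hypothetical buildings, pick a fixed number whose combined mean output meets a target while the variance of the summed output is minimal. The paper formulates this as a mixed integer quadratic program with quantity constraints; the brute-force enumeration below, with unit quantities, is only an illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_bldg = 10
mean_kwh = rng.uniform(800, 1500, n_bldg)      # hypothetical monthly output per rooftop
A = rng.normal(size=(n_bldg, n_bldg))
cov = A @ A.T * 100.0                           # hypothetical covariance of outputs

k = 4                                           # cardinality constraint: exactly k rooftops
target = 0.9 * np.sort(mean_kwh)[-k:].sum()     # generation target (feasible by construction)

best = None
for subset in combinations(range(n_bldg), k):
    idx = list(subset)
    if mean_kwh[idx].sum() < target:            # must meet the generation target
        continue
    var = cov[np.ix_(idx, idx)].sum()           # variance of the summed output
    if best is None or var < best[0]:
        best = (var, idx)
print("selected rooftops:", best[1], " output variance:", round(best[0], 1))
```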

  15. Effect of natural inbreeding on variance structure in tests of wind pollination Douglas-fir progenies.

    Treesearch

    Frank C. Sorensen; T.L. White

    1988-01-01

    Studies of the mating habits of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) have shown that wind-pollination families contain a small proportion of very slow-growing natural inbreds. The effect of these very small trees on means, variances, and variance ratios was evaluated for height and diameter in a 16-year-old plantation by...

  16. WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.

    PubMed

    Grech, Victor

    2018-06-01

    Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
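
    For readers who want to see what the spreadsheet is computing, the one-way ANOVA F statistic can be reproduced directly; the data below are hypothetical, and SciPy's built-in routine is used only as a cross-check.

```python
import numpy as np
from scipy import stats

# three hypothetical groups of measurements
groups = [np.array([5.1, 4.9, 5.6, 5.2]),
          np.array([5.9, 6.1, 5.7, 6.3]),
          np.array([5.0, 5.4, 4.8, 5.3])]

k = len(groups)
n = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between-group SS
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)         # within-group SS
F = (ss_between / (k - 1)) / (ss_within / (n - k))
p = stats.f.sf(F, k - 1, n - k)
print(f"F = {F:.3f}, p = {p:.4f}")

# cross-check against SciPy's built-in one-way ANOVA
print(stats.f_oneway(*groups))
```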

  17. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.

  18. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.

  19. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

    Variances of the velocity components and scalars are important as indicators of the turbulence intensity. They also can be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. On these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of the similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity theory, and to yield reasonable estimates of the surface sensible heat fluxes when they are used in variance methods. This gives a validation to the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly fail to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of the surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are also affected by the heterogeneity but to a less extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. Also with the temperature and vertical velocity variances ML variance methods were tested, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original

  20. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  1. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
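
    A small sketch of estimating two variance components (among-lake and within-lake) with the balanced one-way method-of-moments estimator, and of plugging them into a simple design calculation; the data are simulated and the "true" variances are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_lakes, n_sites = 20, 8
true_among, true_within = 4.0, 1.0               # arbitrary "true" variance components
lake_effect = rng.normal(0, np.sqrt(true_among), n_lakes)
y = lake_effect[:, None] + rng.normal(0, np.sqrt(true_within), (n_lakes, n_sites))

# balanced one-way random-effects ANOVA (method-of-moments) estimators
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_lakes * (n_sites - 1))
ms_between = n_sites * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (n_lakes - 1)
var_within = ms_within
var_among = (ms_between - ms_within) / n_sites
print(f"among-lake ~ {var_among:.2f}, within-lake ~ {var_within:.2f}")

# design use: precision of a grand mean estimated from L lakes with m sites each
L, m = 10, 4
print("variance of the grand mean:", var_among / L + var_within / (L * m))
```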

  2. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    rankings were similar among the novel approaches, but considerably different when compared with the homogenous variance approach. Conclusions MTC models using a homogenous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298

  3. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

  4. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma(sup 2)(sub c) (t) decays approximately as t(exp -1). Since the variance of the scalar decays faster than a sample mean (typically is greater than unity), we will introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent from restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma(sub n) which we model in a first step as a deterministic function. In a second step, we generalize gamma(sub n) as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.

  5. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    PubMed

    Filin, I

    2009-06-01

    Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.

  6. Parasitism alters three power laws of scaling in a metazoan community: Taylor's law, density-mass allometry, and variance-mass allometry.

    PubMed

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E

    2015-02-10

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor's law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution.
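
    Taylor's law and the two allometries are fitted as straight lines on log-log axes; the sketch below generates synthetic mean-variance pairs obeying a power law with scatter and recovers the exponent. The parameter values are invented, not those estimated from the Otago lake data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_species = 40
mean_density = 10 ** rng.uniform(-1, 2, n_species)            # hypothetical spatial means
# synthesize variances following Taylor's law V = a * M^b with lognormal scatter
true_a, true_b = 2.0, 1.8
variance = true_a * mean_density ** true_b * np.exp(rng.normal(0, 0.2, n_species))

# Taylor's law is fitted as a straight line on log-log axes: log V = log a + b log M
b, log_a = np.polyfit(np.log10(mean_density), np.log10(variance), 1)
print(f"fitted exponent b = {b:.2f}, fitted coefficient a = {10**log_a:.2f}")
# density-mass and variance-mass allometries are fitted the same way with body
# mass on the x-axis, which is why the three power laws constrain one another.
```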

  7. Excitation variance matching with limited configuration interaction expansions in variational Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric

    In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach it is effective at delivering accurate excitation energies when the wavemore » function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.« less

  8. Excitation variance matching with limited configuration interaction expansions in variational Monte Carlo

    DOE PAGES

    Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric

    2017-10-28

    In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach it is effective at delivering accurate excitation energies when the wavemore » function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.« less

  9. 10 CFR 851.30 - Consideration of variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.30 Consideration of variances. (a) Variances shall be granted by the Under Secretary after considering the recommendation of the Chief Health, Safety and Security Officer. The authority to grant a variance cannot be delegated. (b) The application...

  10. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written...) The CSO may forward the application to the Chief Health, Safety and Security Officer. (2) If the CSO...

  11. Turnover, account value and diversification of real traders: evidence of collective portfolio optimizing behavior

    NASA Astrophysics Data System (ADS)

    Morton de Lachapelle, David; Challet, Damien

    2010-07-01

    Despite the availability of very detailed data on financial markets, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution towards building a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest online Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring to light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs.

  12. Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumonteil, E.; Malvagi, F.

    2012-07-01

    The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimate allowed by the CLT is underestimated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimation. These two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
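
    A sketch of the lag-k autocorrelation diagnostic: compute the autocorrelation of the cycle estimates and inflate the naive variance of their mean accordingly. The AR(1) series below stands in for correlated power-iteration cycles; it is not TRIPOLI-4 output.

```python
import numpy as np

def corrected_variance_of_mean(x, k_max=50):
    """Variance of the mean of cycle estimates x, inflated for lag-k
    autocorrelation (a sketch of the diagnostic described in the abstract)."""
    x = np.asarray(x, float)
    n, xm = len(x), x.mean()
    c0 = np.mean((x - xm) ** 2)
    rho = [np.mean((x[:-k] - xm) * (x[k:] - xm)) / c0 for k in range(1, k_max + 1)]
    naive = c0 / n                                   # variance assuming independent cycles
    corrected = (c0 / n) * (1 + 2 * sum((1 - k / n) * r for k, r in enumerate(rho, 1)))
    return naive, corrected

# AR(1)-correlated "cycle estimates", standing in for a loosely coupled system
rng = np.random.default_rng(0)
phi, e = 0.8, rng.normal(0, 1, 5000)
x = np.empty(5000)
x[0] = e[0]
for i in range(1, 5000):
    x[i] = phi * x[i - 1] + e[i]
naive, corrected = corrected_variance_of_mean(1.0 + 1e-3 * x)
print(f"naive {naive:.3e}  corrected {corrected:.3e}  ratio {corrected / naive:.1f}")
```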

  13. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)

  14. VARIANCE OF MICROSOMAL PROTEIN AND ...

    EPA Pesticide Factsheets

    Differences in the pharmacokinetics of xenobiotics among humans make them differentially susceptible to risk. Differences in enzyme content can mediate pharmacokinetic differences. Microsomal protein is often isolated from liver to characterize enzyme content and activity, but no measures exist to extrapolate these data to the intact liver. Measures were developed from up to 60 samples of adult human liver to characterize the content of microsomal protein and cytochrome P450 (CYP) enzymes. Statistical evaluations are necessary to estimate values far from the mean value. Adult human liver contains 52.9 - 1.476 mg microsomal protein per g; 2587 - 1.84 pmol CYP2E1 per g; and 5237 - 2.214 pmol CYP3A per g (geometric mean - geometric standard deviation). These values are useful for identifying and testing susceptibility as a function of enzyme content when used to extrapolate in vitro rates of chemical metabolism for input to physiologically based pharmacokinetic models, which can then be exercised to quantify the effect of variance in enzyme expression on risk-relevant pharmacokinetic outcomes.
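
    For example, the microsomal-protein content reported above lets one scale an in vitro rate to the whole organ; in the sketch below only the 52.9 mg/g figure comes from the abstract, while the liver mass and in vitro clearance are assumed round numbers.

```python
# In vitro to in vivo extrapolation sketch using the abstract's liver content value
# (52.9 mg microsomal protein per g liver); all other numbers are hypothetical.
mg_microsomal_per_g_liver = 52.9
liver_g = 1800.0                     # assumed adult liver mass, g
clint_ul_min_mg = 15.0               # assumed in vitro intrinsic clearance, uL/min/mg protein

# scale: uL/min/mg protein  ->  mL/min for the whole liver
whole_liver_clint = clint_ul_min_mg * mg_microsomal_per_g_liver * liver_g / 1000.0
print(f"whole-liver intrinsic clearance ~ {whole_liver_clint:.0f} mL/min")

# because enzyme content is log-normally distributed, individuals far from the
# geometric mean can be represented by multiplying or dividing by powers of the
# geometric standard deviation before repeating the same scale-up.
```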

  15. Pareto-optimal estimates that constrain mean California precipitation change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.

  16. Testing Small Variance Priors Using Prior-Posterior Predictive p Values.

    PubMed

    Hoijtink, Herbert; van de Schoot, Rens

    2017-04-03

    Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, both are not suited for the evaluation of models based on small variance priors. In this article, a well behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Gravity Wave Variances and Propagation Derived from AIRS Radiances

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.; Eckermann, S. D.

    2012-01-01

    As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. The novel discovery of the AIRS capability to observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in general circulation models (GCMs).

  18. Monogamy has a fixation advantage based on fitness variance in an ideal promiscuity group.

    PubMed

    Garay, József; Móri, Tamás F

    2012-11-01

    We consider an ideal promiscuity group of females, which implies that all males have the same average mating success. If females have concealed ovulation, then the males' paternity chances are equal. We find that male-based monogamy will be fixed in females' promiscuity group when the stochastic Darwinian selection is described by a Markov chain. We point out that in huge populations the relative advantage (difference between average fitness of different strategies) primarily determines the end of evolution; in the case of neutrality (means are equal) the smallest variance guarantees a fixation (absorption) advantage; when the means and variances are the same, then the higher third moment determines which types will be fixed in the Markov chains.

  19. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... confidential information in reaching a decision on a variance application. Interested members of the public...

  20. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  1. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
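
    The abstract above motivates classifying each location by the modality of its ensemble distribution. As a minimal, hedged illustration of one common way to flag multimodality (not necessarily the classification scheme or confidence metrics developed in the paper), the sketch below compares the BIC of one- and two-component Gaussian mixtures fitted to a 1-D sample of ensemble values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_modality(values, max_components=2):
    """Label a 1-D ensemble distribution as 'unimodal' or 'multimodal' via BIC."""
    x = np.asarray(values).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in range(1, max_components + 1)]
    return "unimodal" if int(np.argmin(bics)) == 0 else "multimodal"

# Synthetic ensemble values at two hypothetical grid locations
rng = np.random.default_rng(1)
single_trend = rng.normal(0.0, 1.0, 200)
divergent = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])
print(classify_modality(single_trend), classify_modality(divergent))
```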

  2. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.
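
    The algorithm above averages the delivered dose over the random initial phase of periodic organ motion; the paper does this analytically through Fourier coefficients of the dose rate. The sketch below illustrates only the underlying idea by brute-force numerical averaging over a uniformly distributed start phase, with an entirely hypothetical dose-rate waveform, motion period, and delivery time.

```python
import numpy as np

T, t_deliv = 4.0, 30.0   # motion period and delivery duration in seconds (assumed)

def dose_rate(t):
    """Hypothetical periodic dose rate (Gy/s) seen by a voxel as the organ cycles."""
    return 0.02 + 0.005 * np.sin(2 * np.pi * t / T)

def delivered_dose(phase, n=2000):
    """Integrate the dose rate over the delivery window for a given start phase."""
    t = np.linspace(0.0, t_deliv, n)
    return np.trapz(dose_rate(t + phase), t)

phases = np.linspace(0.0, T, 200, endpoint=False)   # uniform random initial phase
doses = np.array([delivered_dose(p) for p in phases])
print("mean dose:", doses.mean(), "variance:", doses.var())
```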

  3. Weighting Mean and Variability during Confidence Judgments

    PubMed Central

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  4. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
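
    For readers who want the same computation outside a spreadsheet, the sketch below reproduces the standard single-factor ANOVA F test from per-group sample sizes, means, and standard deviations. It is not tied to the Excel workflow discussed in the article, and the summary numbers are invented.

```python
import numpy as np
from scipy.stats import f as f_dist

def anova_from_summary(ns, means, sds):
    """Single-factor ANOVA F test from per-group n, mean, and standard deviation."""
    ns, means, sds = map(np.asarray, (ns, means, sds))
    k, N = len(ns), ns.sum()
    grand_mean = np.sum(ns * means) / N
    ss_between = np.sum(ns * (means - grand_mean) ** 2)
    ss_within = np.sum((ns - 1) * sds ** 2)
    df_b, df_w = k - 1, N - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, f_dist.sf(F, df_b, df_w)   # F statistic and p value

# Hypothetical summary statistics for three categories
print(anova_from_summary([12, 15, 10], [5.1, 6.3, 4.8], [1.2, 1.5, 1.1]))
```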

  5. Evaluation of different approaches for identifying optimal sites to predict mean hillslope soil moisture content

    NASA Astrophysics Data System (ADS)

    Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui

    2017-04-01

    The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a certain area and ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (a total of 43 sampling days) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and standard deviation of relative difference (SDRD)) and based on the smallest SDRD, K-means clustering based on soil properties and terrain indices (EFs), repeated soil moisture measurements (Theta), EFs plus one-time soil moisture data (EFsTheta), and the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA), and global and stratified random sampling strategies. Results showed that the TS based on the smallest ITS was better (RMSE = 0.023 m3 m-3) than that based on the smallest SDRD (RMSE = 0.034 m3 m-3). The K-means clustering based on EFsTheta (-PCA) was better (RMSE <0.020 m3 m-3) than these based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by the land use was more efficient than the global random method. Forty and 60 sampling sites are needed for stratified sampling and global sampling respectively to make their performances comparable to the best K-means
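
    As a sketch of the temporal stability ranking referred to above, the code below computes each site's mean relative difference (MRD) and SDRD and combines them into an ITS-style index; the root-sum-of-squares combination and the synthetic soil moisture matrix are assumptions for illustration, not the study's exact definition or data.

```python
import numpy as np

def temporal_stability_ranking(theta):
    """theta: array of shape (n_dates, n_sites) of soil moisture observations.
    Returns MRD, SDRD, and an ITS-style index per site (smaller = more representative)."""
    spatial_mean = theta.mean(axis=1, keepdims=True)     # mean over sites on each date
    rel_diff = (theta - spatial_mean) / spatial_mean     # relative difference per site/date
    mrd = rel_diff.mean(axis=0)                          # mean relative difference
    sdrd = rel_diff.std(axis=0, ddof=1)                  # standard deviation of rel. diff.
    its = np.sqrt(mrd ** 2 + sdrd ** 2)                  # assumed combined index
    return mrd, sdrd, its

rng = np.random.default_rng(2)
theta = np.clip(rng.normal(0.30, 0.05, size=(43, 77)), 0.05, 0.50)  # synthetic data
mrd, sdrd, its = temporal_stability_ranking(theta)
print("most representative site:", int(np.argmin(its)))
```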

  6. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... reaching a decision on a variance application. Interested members of the public will be allowed a...

  7. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.

  8. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples so as to minimize the variances of the best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to estimators based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  9. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of correlation matrix is proposed, which is composed of the detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
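
    The MVP model mentioned above has a closed-form solution once a covariance matrix is fixed. The sketch below computes fully invested minimum-variance weights for a small hypothetical covariance matrix assembled from a correlation matrix (which could equally be PCC- or DCCA-based) and per-asset volatilities; the numbers are illustrative only.

```python
import numpy as np

def min_variance_weights(cov):
    """Fully invested minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical correlation matrix (DCCA- or PCC-style) and volatilities
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])
vols = np.array([0.02, 0.03, 0.025])
cov = corr * np.outer(vols, vols)
print("MVP weights:", min_variance_weights(cov))
```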

  10. In vivo recovery of factor VIII and factor IX: intra- and interindividual variance in a clinical setting.

    PubMed

    Björkman, S; Folkesson, A; Berntorp, E

    2007-01-01

    In vivo recovery (IVR) is traditionally used as a parameter to characterize the pharmacokinetic properties of coagulation factors. It has also been suggested that dosing of factor VIII (FVIII) and factor IX (FIX) can be adjusted according to the need of the individual patient, based on an individually determined IVR value. This approach, however, requires that the individual IVR value is more reliably representative for the patient than the mean value in the population, i.e. that there is less variance within than between the individuals. The aim of this investigation was to compare intra- and interindividual variance in IVR (as U dL-1 per U kg-1) for FVIII and plasma-derived FIX in a cohort of non-bleeding patients with haemophilia. The data were collected retrospectively from six clinical studies, yielding 297 IVR determinations in 50 patients with haemophilia A and 93 determinations in 13 patients with haemophilia B. For FVIII, the mean variance within patients exceeded the between-patient variance. Thus, an individually determined IVR value is apparently no more informative than an average, or population, value for the dosing of FVIII. There was no apparent relationship between IVR and age of the patient (1.5-67 years). For FIX, the mean variance within patients was lower than the between-patient variance, and there was a significant positive relationship between IVR and age (13-69 years). From these data, it seems probable that using an individual IVR confers little advantage in comparison to using an age-specific population mean value. Dose tailoring of coagulation factor treatment has been applied successfully after determination of the entire single-dose curve of FVIII:C or FIX:C in the patient and calculation of the relevant pharmacokinetic parameters. However, the findings presented here do not support the assumption that dosing of FVIII or FIX can be individualized on the basis of a clinically determined IVR value.

  11. [EuCliD 5TM Clinic Variance Report: a means to improve the safety of patients and staff].

    PubMed

    Oggero, Anna Rita; Palmieri, Veronica; Cerreto, Maria; Manna, Luisa; Lettieri, Iolanda; Napoli, Antonio; Ravone, Virginia; Pelliccia, Francesco; Moretti, Manuela; Parisotto, Maria Teresa

    2010-01-01

    The collection of information about events in the healthcare sector has been documented internationally for more than 25 years. Incident reporting is used for the structured acquisition of information about adverse events to improve patient and healthcare staff safety, prepare corrective action, and prevent event recurrence in the future. The establishment of an incident reporting system requires that the staff involved should be capable of recognizing events which require reporting. The aim of this work was to encourage operators to use the incident reporting system and gradually achieve 100% compliance in the reporting of adverse events and corrective and preventive actions taken. The project was carried out by the staff of one NephroCare dialysis center. The parameters observed were how many times the Variance Report was used, how problems were analyzed, and how many times and by what means the medical and nursing staff took action to correct problems. Ten months from the start of the project 100% reporting was achieved. All selected adverse events were correctly reported and corrective or preventive action was taken to improve patient care and dialysis center organization. Only effective feedback on the results achieved in terms of safety and tangible improvements by staff will allow the number of reports to be kept high, and maintain participants' compliance with the incident reporting system over the long term.

  12. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  13. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    PubMed

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
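
    As a minimal illustration of the transformation family used above (not the Bayesian analysis in the paper, which selects the transformation parameter from its marginal posterior), the sketch below applies scipy's maximum-likelihood Box-Cox transform to skewed synthetic data and shows the reduction in skewness; the data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
litter_size = rng.lognormal(mean=2.0, sigma=0.4, size=500)   # skewed, positive data

transformed, lam = stats.boxcox(litter_size)   # lambda chosen by maximum likelihood
print("Box-Cox lambda:", round(lam, 3))
print("skewness before:", round(stats.skew(litter_size), 3),
      "after:", round(stats.skew(transformed), 3))
```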

  14. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
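
    For concreteness, the sketch below implements the DerSimonian and Laird method-of-moments estimator of the between-study variance that the review takes as its point of departure; the five effect estimates and within-study variances are invented. The review itself recommends alternatives such as the Paule-Mandel and restricted maximum likelihood estimators, which are not shown here.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Method-of-moments (DerSimonian-Laird) estimate of the between-study variance.
    y: study effect estimates, v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)           # fixed-effect weighted mean
    Q = np.sum(w * (y - y_bar) ** 2)            # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)          # truncate at zero

# Hypothetical meta-analysis of five studies
print(dersimonian_laird_tau2([0.2, 0.5, 0.1, 0.4, 0.3],
                             [0.04, 0.05, 0.03, 0.06, 0.04]))
```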

  15. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
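
    A minimal sketch of the one-factor random-effects (ANOVA) estimation described above, assuming balanced data: the within-patient mean square estimates the random component, and the between-patient mean square, corrected by the within term, estimates the systematic component. The patient counts, fraction numbers, and true variances below are hypothetical.

```python
import numpy as np

def variance_components(errors):
    """One-factor random-effects ANOVA estimates for setup errors.
    errors: array of shape (n_patients, n_fractions), balanced data."""
    g, n = errors.shape
    patient_means = errors.mean(axis=1)
    msw = np.sum((errors - patient_means[:, None]) ** 2) / (g * (n - 1))  # within
    msb = n * np.sum((patient_means - errors.mean()) ** 2) / (g - 1)      # between
    sigma2_random = msw
    sigma2_systematic = max(0.0, (msb - msw) / n)   # unbiased ANOVA estimator
    return sigma2_systematic, sigma2_random

rng = np.random.default_rng(4)
true_sys, true_rand = 2.0, 1.5          # mm^2, assumed
patient_offsets = rng.normal(0, np.sqrt(true_sys), size=(30, 1))
data = patient_offsets + rng.normal(0, np.sqrt(true_rand), size=(30, 5))
print(variance_components(data))
```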

  16. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    PubMed Central

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individuals members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  17. Subspace K-means clustering.

    PubMed

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).

  18. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  19. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  20. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  1. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    NASA Astrophysics Data System (ADS)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  2. Parasitism alters three power laws of scaling in a metazoan community: Taylor’s law, density-mass allometry, and variance-mass allometry

    PubMed Central

    Lagrue, Clément; Poulin, Robert; Cohen, Joel E.

    2015-01-01

    How do the lifestyles (free-living unparasitized, free-living parasitized, and parasitic) of animal species affect major ecological power-law relationships? We investigated this question in metazoan communities in lakes of Otago, New Zealand. In 13,752 samples comprising 1,037,058 organisms, we found that species of different lifestyles differed in taxonomic distribution and body mass and were well described by three power laws: a spatial Taylor’s law (the spatial variance in population density was a power-law function of the spatial mean population density); density-mass allometry (the spatial mean population density was a power-law function of mean body mass); and variance-mass allometry (the spatial variance in population density was a power-law function of mean body mass). To our knowledge, this constitutes the first empirical confirmation of variance-mass allometry for any animal community. We found that the parameter values of all three relationships differed for species with different lifestyles in the same communities. Taylor's law and density-mass allometry accurately predicted the form and parameter values of variance-mass allometry. We conclude that species of different lifestyles in these metazoan communities obeyed the same major ecological power-law relationships but did so with parameters specific to each lifestyle, probably reflecting differences among lifestyles in population dynamics and spatial distribution. PMID:25550506
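
    All three relationships above are power laws of the form y = a x^b, conventionally fitted by ordinary least squares in log-log space. The sketch below fits a Taylor's-law exponent to synthetic mean-variance pairs; the data are invented and the fitting choice is a generic convention, not necessarily the estimator used in the paper.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares in log-log space."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical spatial mean densities and variances for a set of species
rng = np.random.default_rng(5)
mean_density = rng.lognormal(0.0, 1.0, 40)
variance = 2.0 * mean_density ** 1.8 * rng.lognormal(0.0, 0.2, 40)  # Taylor-law-like data
a, b = fit_power_law(mean_density, variance)
print("Taylor's law exponent b ~", round(b, 2))
```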

  3. Susceptibility-weighted imaging using inter-echo-variance channel combination for improved contrast at 7 tesla.

    PubMed

    Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria

    2017-04-01

    To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) Homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed and processing time of pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    NASA Technical Reports Server (NTRS)

    Hahne, Gerhard

    2012-01-01

    Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and of the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  5. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
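
    As a hedged sketch of the plain KECA step described above (ranking kernel eigenpairs by their contribution to the Renyi entropy estimate rather than by variance), the code below builds an RBF kernel, scores each eigenpair by lambda_i * (e_i' 1)^2, and keeps the top-scoring components. The OKECA extension, with its ICA-style rotation optimized by gradient ascent, is not reproduced here, and the kernel parameter is fixed arbitrarily rather than selected by the criteria the brief analyzes.

```python
import numpy as np

def keca(X, n_components=2, gamma=1.0):
    """Kernel entropy component analysis: rank kernel eigenpairs by their
    contribution lambda_i * (e_i' 1)^2 to the Renyi entropy estimate."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    lam, E = np.linalg.eigh(K)                                      # ascending eigenvalues
    contrib = lam * (E.sum(axis=0)) ** 2                            # entropy contributions
    idx = np.argsort(contrib)[::-1][:n_components]
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))          # projected data

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
print(keca(X).shape)   # (100, 2)
```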

  6. Finding meaning in loss: the mediating role of social support between personality and two construals of meaning.

    PubMed

    Boyraz, Güler; Horne, Sharon G; Sayger, Thomas V

    2012-07-01

    Dimensions of personality may shape an individual's response to loss both directly and indirectly through its effects on other variables such as an individual's ability to seek social support. The mediating effect of social support on the relationship between personality (i.e., extraversion and neuroticism) and 2 construals of meaning (i.e., sense-making and benefit-finding) among 325 bereaved individuals was explored using path analysis. Supporting our hypotheses, social support mediated the relationship between personality and construals of meaning. Neuroticism was negatively and indirectly associated with both sense-making and benefit-finding through social support. Extraversion had a significant positive relationship to social support, which, in turn, mediated the impact of extraversion on both sense-making and benefit finding. The model explained 35% of the variance in social support, 19% of the variance in sense-making, and 25% of the variance in benefit-finding. Implications are discussed in light of existing theories of bereavement and loss.

  7. Arterial cannula shape optimization by means of the rotational firefly algorithm

    NASA Astrophysics Data System (ADS)

    Tesch, K.; Kaczorowska, K.

    2016-03-01

    This article presents global optimization results of arterial cannula shapes by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e. blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces which results in faster convergence and better solutions in comparison with its standard version. This is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep to a minimum the computational cost of the objective function of the problem, which comes from numerical solution of the nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.
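
    For orientation, the sketch below implements the standard firefly algorithm (movement toward brighter fireflies with attractiveness decaying as exp(-gamma r^2), plus a shrinking random step); it is not the rotational modification introduced in the article, and the sphere objective merely stands in for the CFD-based cannula loss function. The parameter values are conventional defaults, not those used in the study.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Standard firefly algorithm for minimization (not the rotational variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
    intensity = np.array([f(x) for x in X])          # lower objective = brighter
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:      # move i towards brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    intensity[i] = f(X[i])
        alpha *= 0.97                                # gradually reduce randomness
    best = int(np.argmin(intensity))
    return X[best], intensity[best]

# Toy objective standing in for the flow-loss evaluation of a cannula shape
sphere = lambda x: np.sum(x ** 2)
print(firefly_minimize(sphere, bounds=[(-5, 5)] * 3))
```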

  8. An Optimal Mean Based Block Robust Feature Extraction Method to Identify Colorectal Cancer Genes with Integrated Data.

    PubMed

    Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui

    2017-08-17

    It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we conducted a feature extraction method named Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using the integrated colorectal cancer data. Firstly, based on the optimal mean and L2,1-norm, a novel feature extraction method called Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method which introduces the block ideology into OMRFE method is put forward to process the colorectal cancer integrated data which includes multiple genomic data: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that the OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.

  9. Optimization under uncertainty of parallel nonlinear energy sinks

    NASA Astrophysics Data System (ADS)

    Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe

    2017-04-01

    Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.

  10. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n 0 = 30, 100 and 300 cm -3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H 2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H 2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H 2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H 2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm -3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  11. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE PAGES

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n 0 = 30, 100 and 300 cm -3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H 2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H 2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H 2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H 2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm -3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  12. Speed Variance and Its Influence on Accidents.

    ERIC Educational Resources Information Center

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  13. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
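
    A small sketch of the Part 1 calculations for a continuous variable, using Python's standard library on an invented sample of hospital lengths of stay; note that statistics.variance and statistics.stdev use the n - 1 (sample) denominator.

```python
import statistics

length_of_stay = [2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5]   # a continuous variable (days)

print("mean:", statistics.mean(length_of_stay))
print("median:", statistics.median(length_of_stay))
print("sample variance:", statistics.variance(length_of_stay))
print("sample standard deviation:", statistics.stdev(length_of_stay))
```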

  14. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and... variances from the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood...

  15. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and... variances from the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood...

  16. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    ERIC Educational Resources Information Center

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  17. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  18. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS - aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case-studies. In case study 1 aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was of similar level as before the offset occurred (2.11 a.u.). In case-study 2 a calibration combining different variance sources was conducted. The achieved performance was of sufficient level with an absolute error being better than 0.8% of the mean concentration, therefore being able to compensate negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Blind Deconvolution Method of Image Deblurring Using Convergence of Variance

    DTIC Science & Technology

    2011-03-24

    random variable x is [9] f_X(x) = (1/(√(2π)·σ))·exp(−(x−m)²/(2σ²)), −∞ < x < ∞, σ > 0, where m is the mean and σ² is the variance. (Figure 1: Gaussian distribution) ... of the MAP Estimation algorithm when N was set to 50. The APEX method is not without its own difficulties when dealing with astronomical data

  20. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  1. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
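
    As a concrete illustration of the primal problem described above (investment risk minimization under budget and expected-return constraints), the sketch below assembles the KKT linear system of the corresponding equality-constrained quadratic program and solves it directly. The covariance matrix, expected returns, target return, and the normalization of the budget constraint are synthetic placeholders, not the paper's replica-analysis setup.

```python
import numpy as np

def min_variance_portfolio(cov, mu, target_return, budget=1.0):
    """Solve min w' cov w  subject to  1'w = budget and mu'w = target_return.

    Stationarity of the Lagrangian gives a linear KKT system in (w, lambda1, lambda2).
    """
    n = len(mu)
    ones = np.ones(n)
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2.0 * cov          # gradient of the quadratic risk term
    kkt[:n, n] = ones                # budget-constraint multiplier column
    kkt[:n, n + 1] = mu              # return-constraint multiplier column
    kkt[n, :n] = ones
    kkt[n + 1, :n] = mu
    rhs = np.concatenate([np.zeros(n), [budget, target_return]])
    w = np.linalg.solve(kkt, rhs)[:n]
    return w, w @ cov @ w            # optimal weights and minimized variance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    cov = A @ A.T / 5 + 0.1 * np.eye(5)      # synthetic positive-definite covariance
    mu = rng.uniform(0.01, 0.05, size=5)     # synthetic expected returns
    w, risk = min_variance_portfolio(cov, mu, target_return=0.03)
    print("weights:", w, "variance:", risk)
```

    The dual problem (expected-return maximization at fixed risk) leads to the same kind of KKT structure with the roles of objective and constraint exchanged.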

  2. Optimization of thiamethoxam adsorption parameters using multi-walled carbon nanotubes by means of fractional factorial design.

    PubMed

    Panić, Sanja; Rakić, Dušan; Guzsvány, Valéria; Kiss, Erne; Boskovic, Goran; Kónya, Zoltán; Kukovecz, Ákos

    2015-12-01

    The aim of this work was to evaluate significant factors affecting the thiamethoxam adsorption efficiency using oxidized multi-walled carbon nanotubes (MWCNTs) as adsorbents. Five factors (initial solution concentration of thiamethoxam in water, temperature, solution pH, MWCNTs weight and contact time) were investigated using a 2^(5-1) fractional factorial design of resolution V. The obtained linear model was statistically tested using analysis of variance (ANOVA) and the analysis of residuals was used to investigate the model validity. It was observed that the factors and their second-order interactions affecting the thiamethoxam removal can be divided into three groups: very important, moderately important and insignificant ones. The initial solution concentration was found to be the most influential parameter for thiamethoxam adsorption from water. Optimization of the factor levels was carried out by minimizing those parameters which are usually critical in real life: the temperature (energy), contact time (money) and weight of MWCNTs (potential health hazard), in order to maximize the adsorbed amount of the pollutant. The results of maximal adsorbed thiamethoxam amount in both real and optimized experiments indicate that among the minimized parameters the adsorption time is the one that makes the largest difference. The results of this study indicate that fractional factorial design is a very useful tool for screening a larger number of parameters and reducing the number of adsorption experiments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. SU-E-T-367: Optimization of DLG Using TG-119 Test Cases and a Weighted Mean Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintay, B; Vanderstraeten, C; Terrell, J

    2014-06-01

    Purpose: Optimization of the dosimetric leaf gap (DLG) is an important step in commissioning the Eclipse treatment planning system for sliding window intensity-modulated radiation therapy (SW-IMRT) and RapidArc. Often the values needed for optimal dose delivery differ markedly from those measured at commissioning. We present a method to optimize this value using the AAPM TG-119 test cases. Methods: For SW-IMRT and RapidArc, TG-119 based test plans were created using a water-equivalent phantom. Dose distributions measured on film and ion chamber (IC) readings taken in low-gradient regions within the targets were analyzed separately. Since DLG is a single value per energy, SW-IMRT and RapidArc must be considered simultaneously. Plans were recalculated using a linear sweep from 0.02 cm (the minimum DLG) to 0.3 cm. The calculated point doses were compared to the measured doses for each plan, and based on these comparisons an optimal DLG value was computed for each plan. TG-119 cases are designed to push the system in various ways, thus a weighted mean of the DLG was computed where the relative importance of each type of plan was given a score from 0.0 to 1.0. Finally, SW-IMRT and RapidArc are assigned an overall weight based on clinical utilization. Our routine patient-QA (PQA) process was performed as independent validation. Results: For a Varian TrueBeam, the optimized DLG varied with σ = 0.044 cm for SW-IMRT and σ = 0.035 cm for RapidArc. The difference between the weighted mean SW-IMRT and RapidArc value was 0.038 cm. We predicted utilization of 25% SW-IMRT and 75% RapidArc. The resulting DLG was ~1 mm different from that found at commissioning and produced an average error of <1% for SW-IMRT and RapidArc PQA test cases separately. Conclusion: The weighted mean method presented is a useful tool for determining an optimal DLG value for commissioning Eclipse.

  4. Meaning-making intervention during breast or colorectal cancer treatment improves self-esteem, optimism, and self-efficacy.

    PubMed

    Lee, Virginia; Robin Cohen, S; Edgar, Linda; Laizner, Andrea M; Gagnon, Anita J

    2006-06-01

    Existential issues often accompany a diagnosis of cancer and remain one aspect of psychosocial oncology care for which there is a need for focused, empirically tested interventions. This study examined the efficacy of a novel psychological intervention specifically designed to address existential issues through the use of meaning-making coping strategies on psychological adjustment to cancer. Eighty-two breast or colorectal cancer patients were randomly chosen to receive routine care (control group) or up to four sessions that explored the meaning of the emotional responses and cognitive appraisals of each individual's cancer experience within the context of past life events and future goals (experimental group). This paper reports the results from 74 patients who completed and returned pre- and post-test measures for self-esteem, optimism, and self-efficacy. After controlling for baseline scores, the experimental group participants demonstrated significantly higher levels of self-esteem, optimism, and self-efficacy compared to the control group. The results are discussed in light of the theoretical and clinical implications of meaning-making coping in the context of stress and illness.

  5. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    PubMed

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
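
    As a rough counterpart to the Monte Carlo comparison mentioned above, the sketch below runs a simple discrete-time SIR process on a configuration-model network and estimates the mean and variance of the number of infectious individuals after a fixed number of steps. The degree distribution, transmission and recovery probabilities, and the discrete-time approximation (rather than the paper's continuous-time Markovian dynamics) are all illustrative assumptions.

```python
import numpy as np
import networkx as nx

def sir_prevalence(G, p_transmit, p_recover, steps, rng):
    """One discrete-time SIR run; returns the number of infectious nodes after `steps`."""
    status = {v: "S" for v in G}                     # S, I or R per node
    status[rng.choice(list(G.nodes))] = "I"          # one initial infectious individual
    for _ in range(steps):
        new_status = dict(status)
        for v in G:
            if status[v] == "I":
                for u in G.neighbors(v):
                    if status[u] == "S" and rng.random() < p_transmit:
                        new_status[u] = "I"
                if rng.random() < p_recover:
                    new_status[v] = "R"
        status = new_status                           # synchronous update
    return sum(1 for s in status.values() if s == "I")

rng = np.random.default_rng(1)
degrees = rng.choice([2, 3, 5, 10], size=2000, p=[0.4, 0.3, 0.2, 0.1])
if degrees.sum() % 2:                                 # configuration model needs an even degree sum
    degrees[0] += 1
G = nx.configuration_model(degrees, seed=2)
G = nx.Graph(G)                                       # drop parallel edges for simplicity
G.remove_edges_from(nx.selfloop_edges(G))

samples = [sir_prevalence(G, 0.05, 0.2, steps=15, rng=rng) for _ in range(200)]
print("mean prevalence:", np.mean(samples), "variance:", np.var(samples, ddof=1))
```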

  6. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of a...

  7. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of a...

  8. Antithetic proportional-integral feedback for reduced variance and improved control performance of stochastic reaction networks.

    PubMed

    Briat, Corentin; Gupta, Ankit; Khammash, Mustafa

    2018-06-01

    The ability of a cell to regulate and adapt its internal state in response to unpredictable environmental changes is called homeostasis and this ability is crucial for the cell's survival and proper functioning. Understanding how cells can achieve homeostasis, despite the intrinsic noise or randomness in their dynamics, is fundamentally important for both systems and synthetic biology. In this context, a significant development is the proposed antithetic integral feedback (AIF) motif, which is found in natural systems, and is known to ensure robust perfect adaptation for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction network. From the standpoint of applications, one drawback of this motif is that it often leads to an increased cell-to-cell heterogeneity or variance when compared to a constitutive (i.e. open-loop) control strategy. Our goal in this paper is to show that this performance deterioration can be countered by combining the AIF motif and a negative feedback strategy. Using a tailored moment closure method, we derive approximate expressions for the stationary variance for the controlled network that demonstrate that increasing the strength of the negative feedback can indeed decrease the variance, sometimes even below its constitutive level. Numerical results verify the accuracy of these results and we illustrate them by considering three biomolecular networks with two types of negative feedback strategies. Our computational analysis indicates that there is a trade-off between the speed of the settling-time of the mean trajectories and the stationary variance of the controlled species; i.e. smaller variance is associated with larger settling-time. © 2018 The Author(s).

  9. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  10. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.

  11. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    PubMed Central

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming of the fuzzy c-means (FCM) algorithm, which requires the number of clusters to be known in advance, this paper proposes a new self-adaptive method to determine the optimal number of clusters. First, a density-based algorithm is put forward that, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters instead of relying on the empirical rule √n, and obtains the optimal initial cluster centroids, thereby mitigating the limitation of FCM that randomly selected cluster centroids can drive the convergence result to a local minimum. Second, by introducing a penalty function, the paper proposes a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that, as the number of clusters approaches the number of objects in the dataset, the value of the validity index does not monotonically decrease toward zero, which would otherwise cause the optimal number of clusters to lose robustness and its value as a decision criterion. Then, based on these studies, a self-adaptive FCM algorithm is put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments on the UCI, KDD Cup 1999, and synthetic datasets show that the method not only effectively determines the optimal number of clusters, but also reduces the number of FCM iterations while producing stable clustering results. PMID:28042291
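
    For readers unfamiliar with the underlying iteration, the sketch below implements plain fuzzy c-means (not the paper's self-adaptive variant or its validity index): alternate updates of the membership matrix and the cluster centroids until the memberships stop changing. The fuzzifier m, tolerance, and toy data are placeholder choices.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns (centroids, membership matrix U of shape (c, n))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                           # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centroids = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distances of every point to every centroid, shape (c, n)
        d2 = ((X[None, :, :] - centroids[:, None, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)                  # guard against division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)            # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centroids, U_new
        U = U_new
    return centroids, U

# toy usage: two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
centroids, U = fcm(X, c=2)
print(centroids)
```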

  12. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
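
    The dependence of these statistics on the number of uniformly distributed sampling points can be tabulated with a few lines of code. The sketch below uses a toy inverse-square dose field from hypothetical dwell positions (not clinical implant data) and reports D(mean), the variance, D(min), and D(max) for increasing numbers of sampling points.

```python
import numpy as np

rng = np.random.default_rng(0)
sources = rng.uniform(-1, 1, size=(10, 3))        # hypothetical dwell positions

def dose(points):
    """Toy inverse-square dose from point sources (not a clinical dose model)."""
    r2 = ((points[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
    return (1.0 / np.fmax(r2, 1e-3)).sum(axis=1)

# tabulate the statistics against the number of uniformly distributed sampling points
for n in (100, 1_000, 10_000, 100_000):
    pts = rng.uniform(-2, 2, size=(n, 3))         # uniform sampling points in the volume
    d = dose(pts)
    print(f"n={n:7d}  D_mean={d.mean():9.2f}  var={d.var():12.2f}  "
          f"D_min={d.min():7.3f}  D_max={d.max():9.2f}")
```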

  13. Continuous Linguistic Rhetorical Education as a Means of Optimizing Language Policy in Russian Multinational Regions

    ERIC Educational Resources Information Center

    Vorozhbitova, Alexandra A.; Konovalova, Galina M.; Ogneva, Tatiana N.; Chekulaeva, Natalia Y.

    2017-01-01

    Drawing on the function of Russian as a state language the paper proposes a concept of continuous linguistic rhetorical (LR) education perceived as a means of optimizing language policy in Russian multinational regions. LR education as an innovative pedagogical system shapes a learner's readiness for self-projection as a strong linguistic…

  14. Meditations on birth weight: is it better to reduce the variance or increase the mean?

    PubMed

    Haig, David

    2003-07-01

    A conceptual model is presented here in which the birth weight distribution is decomposed into a distribution of target weights and a distribution of perturbations from the target. The target weight is the adaptive goal of fetal development. In the simplest model, perinatal mortality is independent of variation in target weight and determined solely by the magnitude of the perturbation of birth weight from the target. In this model, mortality risk is concentrated in the tails of the birth weight distribution. A difference between populations in their distributions of target weights will be associated with a corresponding shift in their curves of weight-specific risk, without any difference between the populations in overall risk. In this model, risk would be reduced by decreasing the variance of the distribution of perturbations. The model is discussed in the context of the so-called "paradoxes of low birth weight."

  15. Optimization of hole generation in Ti/CFRP stacks

    NASA Astrophysics Data System (ADS)

    Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.

    2018-03-01

    The article aims to describe methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, theory of probability, mathematical statistics, and experiment planning and manufacturing process optimization theories. Statistical processing of experimental data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series profilometer, and in CFRP stacks using a Bruker ContourGT-K1 optical microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 measuring machine, and temperatures in cutting zones were recorded with a FLIR SC7000 Series infrared camera. Models of multivariate analysis of variance were developed. They show effects of drilling modes on surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved. Optimal cutting technologies which improve performance were developed. Methods for assessing thermal tool and material expansion effects on the accuracy of holes in Ti/CFRP/Ti stacks were developed.

  16. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed

  17. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed

  18. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  19. Optimized data fusion for K-means Laplacian clustering

    PubMed Central

    Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves

    2011-01-01

    Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271

  20. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    NASA Astrophysics Data System (ADS)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performances of most other existing HAR-RRV-type models in addition to the models used in this paper.
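
    A plain HAR-type regression (without the structural-change terms the HAR-RRV-SC model adds) can be sketched as an OLS fit of today's variance on yesterday's value and the trailing 5-day and 22-day averages. The simulated series below stands in for a realized range-based variance series; the lag structure follows the usual HAR convention and everything else is a placeholder.

```python
import numpy as np

def har_design(rv, h=1):
    """Build HAR regressors: constant, daily lag, weekly (5-day) and monthly (22-day) averages."""
    T = len(rv)
    rows, targets = [], []
    for t in range(22, T - h):
        rows.append([1.0,
                     rv[t - 1],
                     rv[t - 5:t].mean(),
                     rv[t - 22:t].mean()])
        targets.append(rv[t + h - 1])
    return np.array(rows), np.array(targets)

# simulate a persistent positive series as a stand-in for realized range-based variance
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, 0.1)
rv = np.exp(x)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
fitted = X[-1] @ beta                          # fitted value for the last observation
print("HAR coefficients:", beta, "last fitted value:", fitted)
```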

  1. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-07

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
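
    The recommended workflow (estimate a power model of variance from replicate standards, then weight the calibration regression by the reciprocal of the modelled variance) can be sketched as follows. The calibration levels, noise model, and power-law fit below are illustrative assumptions, not data from any of the three techniques studied.

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.repeat(np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0]), 5)   # replicate standards
true_signal = 3.0 * conc + 1.0
# heteroskedastic noise: standard deviation grows with signal (power law)
y = true_signal + rng.normal(0, 0.05 * true_signal)

# 1) estimate the power model of variance from replicate standard deviations
levels = np.unique(conc)
means = np.array([y[conc == c].mean() for c in levels])
sds = np.array([y[conc == c].std(ddof=1) for c in levels])
b, log_a = np.polyfit(np.log(means), np.log(sds ** 2), 1)   # log var = log a + b log signal

# 2) weighted least squares with w_i = 1 / modelled variance_i
weights = 1.0 / (np.exp(log_a) * y ** b)
X = np.column_stack([np.ones_like(conc), conc])
W = np.diag(weights)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("intercept, slope:", beta)
```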

  2. Hidden Item Variance in Multiple Mini-Interview Scores

    ERIC Educational Resources Information Center

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  3. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  4. Optimization of hybrid iterative reconstruction level in pediatric body CT.

    PubMed

    Karmazyn, Boaz; Liang, Yun; Ai, Huisi; Eckert, George J; Cohen, Mervyn D; Wanner, Matthew R; Jennings, S Gregory

    2014-02-01

    The objective of our study was to attempt to optimize the level of hybrid iterative reconstruction (HIR) in pediatric body CT. One hundred consecutive chest or abdominal CT examinations were selected. For each examination, six series were obtained: one filtered back projection (FBP) and five HIR series (iDose(4)) levels 2-6. Two pediatric radiologists, blinded to noise measurements, independently chose the optimal HIR level and then rated series quality. We measured CT number (mean in Hounsfield units) and noise (SD in Hounsfield units) changes by placing regions of interest in the liver, muscles, subcutaneous fat, and aorta. A mixed-model analysis-of-variance test was used to analyze correlation of noise reduction with the optimal HIR level compared with baseline FBP noise. One hundred CT examinations were performed of 88 patients (52 females and 36 males) with a mean age of 8.5 years (range, 19 days-18 years); 12 patients had both chest and abdominal CT studies. Radiologists agreed to within one level of HIR in 92 of 100 studies. The mean quality rating was significantly higher for HIR than FBP (3.6 vs 3.3, respectively; p < 0.01). HIR caused minimal (0-0.2%) change in CT numbers. Noise reduction varied among structures and patients. Liver noise reduction positively correlated with baseline noise when the optimal HIR level was used (p < 0.01). HIR levels were significantly correlated with body weight and effective diameter of the upper abdomen (p < 0.01). HIR, such as iDose(4), improves the quality of body CT scans of pediatric patients by decreasing noise; HIR level 3 or 4 is optimal for most studies. The optimal HIR level was less effective in reducing liver noise in children with lower baseline noise.

  5. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  6. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  7. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m-materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantifications. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.

  8. The microcomputer scientific software series 3: general linear model--analysis of variance.

    Treesearch

    Harold M. Rauscher

    1985-01-01

    A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...

  9. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
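
    A direct, segment-averaged estimator along the lines described above can be sketched as follows; the three-series convention B(f1, f2) = E[X(f1) Y(f2) Z*(f1+f2)], the segmenting, and the absence of windowing and normalization are simplifying assumptions rather than the paper's exact procedure. The example feeds the same quadratically coupled signal into all three slots, which is where a bispectrum is expected to be non-zero.

```python
import numpy as np

def cross_bispectrum(x, y, z, nseg=32):
    """Segment-averaged estimate of B(f1, f2) = E[X(f1) Y(f2) conj(Z(f1+f2))].

    The grid is limited to 0 <= f1, f2 < nfft//4 so that f1 + f2 stays below Nyquist.
    """
    n = min(len(x), len(y), len(z))
    seg_len = n // nseg
    half = seg_len // 4
    k1 = np.arange(half)[:, None]
    k2 = np.arange(half)[None, :]
    acc = np.zeros((half, half), dtype=complex)
    for s in range(nseg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        X = np.fft.fft(x[sl] - np.mean(x[sl]))
        Y = np.fft.fft(y[sl] - np.mean(y[sl]))
        Z = np.fft.fft(z[sl] - np.mean(z[sl]))
        acc += X[k1] * Y[k2] * np.conj(Z[k1 + k2])
    return acc / nseg

# example: a quadratically coupled signal produces a bispectral peak near (0.06, 0.10)
rng = np.random.default_rng(0)
t = np.arange(2 ** 14)
a = np.cos(2 * np.pi * 0.06 * t + 1.0)
b = np.cos(2 * np.pi * 0.10 * t + 0.4)
sig = a + b + 0.5 * a * b + 0.1 * rng.standard_normal(t.size)
B = cross_bispectrum(sig, sig, sig)
print("peak magnitude:", np.abs(B).max())
```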

  10. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time at four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing Lagrange multiplier method. According to the theoretical analysis and real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
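
    The Lagrange-multiplier argument can be illustrated with a simplified variance model in which each intensity measurement contributes a term c_i / t_i to the estimator variance (an assumption made purely for illustration; the paper's Stokes-specific noise model differs in detail):

```latex
% Toy variance model: V(t_1,\dots,t_4) = \sum_i c_i/t_i with the total time T fixed.
\begin{aligned}
\mathcal{L} &= \sum_{i=1}^{4} \frac{c_i}{t_i}
              + \lambda\Big(\sum_{i=1}^{4} t_i - T\Big), \\
\frac{\partial\mathcal{L}}{\partial t_i} &= -\frac{c_i}{t_i^{2}} + \lambda = 0
  \;\Longrightarrow\; t_i = \sqrt{c_i/\lambda}, \\
\sum_i t_i = T \;&\Longrightarrow\;
  t_i^{\ast} = T\,\frac{\sqrt{c_i}}{\sum_j \sqrt{c_j}}, \qquad
  V^{\ast} = \frac{\big(\sum_i \sqrt{c_i}\big)^{2}}{T}.
\end{aligned}
```

    Under this toy model the optimal allocation weights each measurement by the square root of its variance coefficient, and by the Cauchy-Schwarz inequality V* is never larger than the variance obtained from an equal split of T, which is the qualitative effect the abstract reports.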

  11. Variance reduction through robust design of boundary conditions for stochastic hyperbolic systems of equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se

    We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can, with the same knowledge of data, get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.

  12. 0-6781 : improved nighttime work zone channelization in confined urban projects.

    DOT National Transportation Integrated Search

    2014-08-01

    Turning into and out of driveways in confined or dense urban work zones can present significant challenges to drivers, especially during nighttime conditions when other visual cues about the driveways may be masked in the dark. These challe...

  13. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  14. Natural variance in pH as a complication in detecting acidification of lakes

    USGS Publications Warehouse

    Turk, J.T.

    1988-01-01

    Natural variance in the pH of three dilute lakes in the Flat Tops Wilderness Area, Colorado, complicates the detection of acidification. Variations in pH during July-September of 1983 were: 0.95 (Ned Wilson Lake), 1.36 (Upper Island Lake), and 1.53 (Oyster Lake). Mean diurnal variations in pH during 1983 were: 0.37 (Ned Wilson Lake), 0.54 (Upper Island Lake), and 0.39 (Oyster Lake). Replicate pH measurements indicate that pH can be measured with a mean variance due to measurement error of ±0.005. Regression analysis indicates that samples collected on the same day of different years may differ because of time of day and percentage of cloud cover. Differences in wind duration and intensity and primary productivity also may cause the pH to differ between years. Such differences can be either random or systematic. Comparisons of pH among 3 yr of data from Ned Wilson Lake indicate that natural variations in pH are much larger than variations in Colorado lakes previously attributed to acidification by precipitation.

  15. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    PubMed

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased

  16. Analytic variance estimates of Swank and Fano factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
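
    For concreteness, the sketch below computes the two metrics from sampled detector outputs, taking the Swank factor as I = m1²/(m0·m2) and the Fano factor as variance/mean, and uses a brute-force bootstrap (not the paper's analytic moment-based estimators) to gauge their variability. The gamma-distributed pulse heights are placeholder data.

```python
import numpy as np

def swank_factor(samples):
    """Swank factor I = m1^2 / (m0 * m2); m0 = 1 for a normalized empirical distribution."""
    m1 = np.mean(samples)
    m2 = np.mean(samples ** 2)
    return m1 ** 2 / m2

def fano_factor(samples):
    """Fano factor F = variance / mean."""
    return np.var(samples, ddof=1) / np.mean(samples)

def bootstrap_var(stat, samples, n_boot=2000, seed=0):
    """Brute-force variance of a statistic, for comparison with analytic estimators."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(samples, size=samples.size, replace=True))
            for _ in range(n_boot)]
    return np.var(reps, ddof=1)

rng = np.random.default_rng(1)
pulses = rng.gamma(shape=20.0, scale=50.0, size=5000)   # placeholder pulse-height samples
print("Swank:", swank_factor(pulses), "bootstrap variance:", bootstrap_var(swank_factor, pulses))
print("Fano :", fano_factor(pulses), "bootstrap variance:", bootstrap_var(fano_factor, pulses))
```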

  17. Macroscopic relationship in primal-dual portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-02-01

    In the present paper, using a replica analysis, we examine the portfolio optimization problem handled in previous work and discuss the minimization of investment risk under constraints of budget and expected return for the case in which the distribution of the hyperparameters of the mean and variance of the return rate of each asset is not limited to a specific probability family. Findings derived using our proposed method are compared with those in previous work to verify the effectiveness of our proposed method. Further, we derive a Pythagorean theorem of the Sharpe ratio and macroscopic relations of opportunity loss. Using numerical experiments, the effectiveness of our proposed method is demonstrated for a specific situation.

  18. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity) while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e., only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  19. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR... risk and will not be modified by the granting of a variance. The community, after examining the... review a community's findings justifying the granting of variances, and if that review indicates a...

  20. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR... risk and will not be modified by the granting of a variance. The community, after examining the... review a community's findings justifying the granting of variances, and if that review indicates a...

  1. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: main content.

    PubMed

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M

    2010-01-01

    Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set is composed of regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporate the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are specially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.

  2. Optimal recall period for caregiver-reported illness in risk factor and intervention studies: a multicountry study.

    PubMed

    Arnold, Benjamin F; Galiani, Sebastian; Ram, Pavani K; Hubbard, Alan E; Briceño, Bertha; Gertler, Paul J; Colford, John M

    2013-02-15

    Many community-based studies of acute child illness rely on cases reported by caregivers. In prior investigations, researchers noted a reporting bias when longer illness recall periods were used. The use of recall periods longer than 2-3 days has been discouraged to minimize this reporting bias. In the present study, we sought to determine the optimal recall period for illness measurement when accounting for both bias and variance. Using data from 12,191 children less than 24 months of age collected in 2008-2009 from Himachal Pradesh in India, Madhya Pradesh in India, Indonesia, Peru, and Senegal, we calculated bias, variance, and mean squared error for estimates of the prevalence ratio between groups defined by anemia, stunting, and underweight status to identify optimal recall periods for caregiver-reported diarrhea, cough, and fever. There was little bias in the prevalence ratio when a 7-day recall period was used (<10% in 35 of 45 scenarios), and the mean squared error was usually minimized with recall periods of 6 or more days. Shortening the recall period from 7 days to 2 days required sample-size increases of 52%-92% for diarrhea, 47%-61% for cough, and 102%-206% for fever. In contrast to the current practice of using 2-day recall periods, this work suggests that studies should measure caregiver-reported illness with a 7-day recall period.
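
    The bias-variance-MSE bookkeeping used to compare recall periods can be sketched as below. The toy simulation only illustrates the variance side (a longer recall period yields more reported cases and hence a less variable prevalence ratio) together with MSE = bias² + variance; it does not model recall-dependent misreporting, and the prevalences, sample sizes, and true ratio are made-up values.

```python
import numpy as np

def pr_metrics(pr_estimates, pr_true):
    """Bias, variance and MSE of a collection of prevalence-ratio estimates."""
    pr_estimates = np.asarray(pr_estimates, dtype=float)
    bias = pr_estimates.mean() - pr_true
    variance = pr_estimates.var(ddof=1)
    return bias, variance, bias ** 2 + variance

# toy simulation: exposed vs unexposed illness prevalence from 400 caregivers per arm,
# with a longer recall period producing more reported cases per child
rng = np.random.default_rng(0)
pr_true = 1.5
for recall_days, p_unexposed in [(2, 0.05), (7, 0.15)]:
    estimates = []
    for _ in range(2000):
        a = rng.binomial(400, pr_true * p_unexposed)   # exposed cases
        b = rng.binomial(400, p_unexposed)             # unexposed cases
        if b:                                          # avoid division by zero
            estimates.append(a / b)
    print(recall_days, "day recall:", pr_metrics(estimates, pr_true))
```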

  3. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  4. Finding reproducible cluster partitions for the k-means algorithm

    PubMed Central

    2013-01-01

    K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well-known, it is common practice to assume that the partition with lowest sum-of-squares (SSQ) total i.e. within cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real world breast cancer dataset with varying data density, and a large bioinformatics dataset. PMID:23369085

  5. Finding reproducible cluster partitions for the k-means algorithm.

    PubMed

    Lisboa, Paulo J G; Etchells, Terence A; Jarman, Ian H; Chambers, Simon J

    2013-01-01

    K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well-known, it is common practice to assume that the partition with the lowest total sum-of-squares (SSQ), i.e. within-cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real world breast cancer dataset with varying data density, and a large bioinformatics dataset.
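
    The core observation, that similar SSQ values can hide dissimilar partitions, is easy to reproduce. The sketch below runs k-means from several initialisations and compares partitions with the adjusted Rand index; the dataset and k are illustrative only and do not implement the paper's 2-d stability map.

    ```python
    # Repeated k-means runs: near-identical SSQ (inertia) can correspond to
    # different partitions, as measured by the adjusted Rand index (ARI).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import adjusted_rand_score

    X, _ = make_blobs(n_samples=1000, centers=12, cluster_std=2.5, random_state=0)

    runs = [KMeans(n_clusters=12, n_init=1, random_state=seed).fit(X)
            for seed in range(10)]
    runs.sort(key=lambda km: km.inertia_)          # best SSQ first

    best = runs[0]
    for km in runs[1:4]:
        ari = adjusted_rand_score(best.labels_, km.labels_)
        print(f"SSQ ratio vs best: {km.inertia_ / best.inertia_:.3f}, ARI vs best: {ari:.2f}")
    ```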

  6. Affix Meaning Knowledge in First Through Third Grade Students.

    PubMed

    Apel, Kenn; Henbest, Victoria Suzanne

    2016-04-01

    We examined grade-level differences in 1st- through 3rd-grade students' performance on an experimenter-developed affix meaning task (AMT) and determined whether AMT performance explained unique variance in word-level reading and reading comprehension, beyond other known contributors to reading development. Forty students at each grade level completed an assessment battery that included measures of phonological awareness, receptive vocabulary, word-level reading, reading comprehension, and affix meaning knowledge. On the AMT, 1st-grade students were significantly less accurate than 2nd- and 3rd-grade students; there was no significant difference in performance between the 2nd- and 3rd-grade students. Regression analyses revealed that the AMT accounted for 8% unique variance in students' performance on word-level reading measures and 6% unique variance in performance on the reading comprehension measure, after age, phonological awareness, and receptive vocabulary were accounted for. These results provide initial information on the development of affix meaning knowledge via an explicit measure in 1st- through 3rd-grade students and demonstrate that affix meaning knowledge uniquely contributes to the development of reading abilities above other known literacy predictors. These findings provide empirical support for how students might use morphological problem solving to read unknown multimorphemic words successfully.

  7. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... is unable to meet the time requirements for which the variance is requested; and (2) A revised UR...

  8. Raising the speed limit from 75 to 80mph on Utah rural interstates: Effects on vehicle speeds and speed variance.

    PubMed

    Hu, Wen

    2017-06-01

    In November 2010 and October 2013, Utah increased speed limits on sections of rural interstates from 75 to 80mph. Effects on vehicle speeds and speed variance were examined. Speeds were measured in May 2010 and May 2014 within the new 80mph zones, at a nearby spillover site, and at more distant control sites where speed limits remained 75mph. Log-linear regression models estimated percentage changes in speed variance and mean speeds for passenger vehicles and large trucks associated with the speed limit increase. Logistic regression models estimated effects on the probability of passenger vehicles exceeding 80, 85, or 90mph and large trucks exceeding 80mph. Within the 80mph zones and at the spillover location in 2014, mean passenger vehicle speeds were significantly higher (4.1% and 3.5%, respectively), as were the probabilities that passenger vehicles exceeded 80mph (122.3% and 88.5%, respectively), than would have been expected without the speed limit increase. Probabilities that passenger vehicles exceeded 85 and 90mph were non-significantly higher than expected within the 80mph zones. For large trucks, the mean speed and probability of exceeding 80mph were higher than expected within the 80mph zones; only the increase in mean speed was significant. Raising the speed limit was associated with non-significant increases in speed variance. The study adds to the wealth of evidence that increasing speed limits leads to higher travel speeds and an increased probability of exceeding the new speed limit. Results moreover contradict the claim that increasing speed limits reduces speed variance. Although the estimated increases in mean vehicle speeds may appear modest, prior research suggests such increases would be associated with substantial increases in fatal or injury crashes. Lawmakers weighing further speed limit increases should take this into account. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  9. One-shot estimate of MRMC variance: AUC.

    PubMed

    Gallas, Brandon D

    2006-03-01

    One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
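
    The per-reader building block referred to above, a nonparametric (U-statistic) AUC with its case-sampling variance, can be sketched as follows. This is not the one-shot MRMC estimator itself; it is the standard single-reader DeLong-type computation on simulated scores, included only to make the "nonparametric variance of a single-reader AUC" concrete.

    ```python
    # Single-reader nonparametric AUC and its U-statistic (DeLong-type) variance.
    import numpy as np

    def auc_and_variance(pos, neg):
        pos = np.asarray(pos, float)[:, None]     # shape (m, 1): diseased scores
        neg = np.asarray(neg, float)[None, :]     # shape (1, n): non-diseased scores
        psi = (pos > neg).astype(float) + 0.5 * (pos == neg)
        auc = psi.mean()
        v10 = psi.mean(axis=1)                    # per-positive-case structural component
        v01 = psi.mean(axis=0)                    # per-negative-case structural component
        var = v10.var(ddof=1) / len(v10) + v01.var(ddof=1) / len(v01)
        return auc, var

    rng = np.random.default_rng(1)
    pos = rng.normal(1.0, 1.0, 60)                # simulated reader scores, diseased cases
    neg = rng.normal(0.0, 1.0, 80)                # simulated reader scores, non-diseased cases
    auc, var = auc_and_variance(pos, neg)
    print(f"AUC = {auc:.3f}, case-sampling variance = {var:.5f}")
    ```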

  10. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part II: proofs of results.

    PubMed

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M

    2010-03-03

    In this companion article to "Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content" [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

  11. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  12. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  13. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...

  14. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  15. Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata

    PubMed Central

    Sztepanacz, Jacqueline L.; Blows, Mark W.

    2015-01-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700

  16. CMB-S4 and the hemispherical variance anomaly

    NASA Astrophysics Data System (ADS)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
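
    The real-space statistic itself is simple: the variance of map pixels computed separately over each Ecliptic hemisphere. The sketch below uses mock pixel values and latitudes rather than an actual CMB map or any specific pixelisation library, purely to illustrate the computation.

    ```python
    # Hemispherical variance from a mock pixelised map (stand-in data, not a CMB map).
    import numpy as np

    rng = np.random.default_rng(2)
    npix = 100_000
    ecliptic_lat = np.degrees(np.arcsin(rng.uniform(-1, 1, npix)))  # uniform on the sphere
    temperature = rng.normal(0.0, 100e-6, npix)                     # mock map values (K)

    north = temperature[ecliptic_lat > 0]
    south = temperature[ecliptic_lat <= 0]
    print(f"northern-hemisphere variance: {north.var(ddof=1):.3e}")
    print(f"southern-hemisphere variance: {south.var(ddof=1):.3e}")
    ```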

  17. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  18. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian popcorn genotypes, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for a trait was taken to be the number at which all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only expansion capacity in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.
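
    The resampling logic can be sketched generically: draw subsets of blocks, recompute a parameter of interest (here a coefficient of variation), and watch how its interval narrows as the number of blocks grows. The data, trait, and the use of exhaustive block combinations below are stand-ins for the study's bootstrap procedure, not a reproduction of it.

    ```python
    # How many blocks are enough? Recompute the coefficient of variation over
    # all subsets of b blocks and track the spread of the estimates (mock data).
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n_blocks, n_genotypes = 7, 10
    yields = rng.normal(50, 8, size=(n_blocks, n_genotypes))   # mock plot yields

    def coef_var(mat):
        return 100.0 * mat.std(ddof=1) / mat.mean()

    full_cv = coef_var(yields)
    for b in range(2, n_blocks + 1):
        cvs = [coef_var(yields[list(idx)]) for idx in combinations(range(n_blocks), b)]
        lo, hi = np.percentile(cvs, [2.5, 97.5])
        print(f"{b} blocks: CV in [{lo:.1f}, {hi:.1f}]  (full-data CV = {full_cv:.1f})")
    ```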

  19. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate is found to be nonnormal even for very large samples. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in application.
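
    Reading "extreme mean" as the mean of a normal distribution truncated at its p-th quantile (an assumption about the definition, kept to the upper tail here), the population quantity for a standard normal is the inverse Mills ratio, E[X | X > z_p] = phi(z_p) / (1 - Phi(z_p)). The sketch below evaluates it directly and cross-checks with scipy's truncated normal.

    ```python
    # Extreme mean of a standard normal truncated at its p-th quantile (upper tail).
    from scipy import stats

    p = 0.95
    z_p = stats.norm.ppf(p)
    closed_form = stats.norm.pdf(z_p) / (1.0 - stats.norm.cdf(z_p))   # inverse Mills ratio

    # Same quantity via scipy's truncated-normal distribution on [z_p, infinity).
    trunc = stats.truncnorm(a=z_p, b=float("inf"))
    print(f"extreme mean above the {p:.0%} quantile: {closed_form:.4f} "
          f"(truncnorm check: {trunc.mean():.4f})")
    ```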

  20. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  1. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
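
    For reference, the baseline that deflation and hierarchical probing improve upon is the plain Hutchinson estimator of tr(A^{-1}). The sketch below uses Rademacher probe vectors on a small dense symmetric positive definite matrix; a real application would use an iterative solver and would add the deflation step, which is not implemented here.

    ```python
    # Plain Hutchinson Monte Carlo estimator of tr(A^{-1}) with Rademacher probes.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    B = rng.normal(size=(n, n))
    A = B @ B.T + n * np.eye(n)                  # well-conditioned SPD stand-in matrix

    def hutchinson_trace_inv(A, n_probes):
        estimates = []
        for _ in range(n_probes):
            z = rng.choice([-1.0, 1.0], size=A.shape[0])   # Rademacher probe vector
            estimates.append(z @ np.linalg.solve(A, z))    # quadrature z^T A^{-1} z
        return np.mean(estimates), np.std(estimates, ddof=1) / np.sqrt(n_probes)

    est, err = hutchinson_trace_inv(A, n_probes=200)
    exact = np.trace(np.linalg.inv(A))
    print(f"estimate = {est:.3f} +/- {err:.3f}, exact = {exact:.3f}")
    ```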

  2. Teleportation of squeezing: Optimization using non-Gaussian resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell'Anno, Fabio; De Siena, Silvio; Illuminati, Fabrizio

    2010-12-15

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, ibid. 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  3. On optimal current patterns for electrical impedance tomography.

    PubMed

    Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D

    2005-02-01

    We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogenous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
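
    The trigonometric patterns mentioned above are easy to generate for an idealized circular array: for L equally spaced electrodes they are cos(k*theta) and sin(k*theta) across electrode angles theta, each injecting zero net current. The sketch below only builds this maximal pattern set; it does not reproduce the paper's statistical optimality analysis, and the electrode count is illustrative.

    ```python
    # Trigonometric current patterns for a circular array of L electrodes.
    import numpy as np

    L = 16
    theta = 2 * np.pi * np.arange(L) / L          # electrode angles on the circle

    patterns = []
    for k in range(1, L // 2 + 1):
        patterns.append(np.cos(k * theta))
        if k < L // 2:                            # sin(k*theta) vanishes identically at k = L/2
            patterns.append(np.sin(k * theta))
    patterns = np.array(patterns)                 # shape (L-1, L): a maximal pattern set

    print("number of patterns:", patterns.shape[0])
    print("max |net injected current| over patterns:", np.abs(patterns.sum(axis=1)).max())
    ```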

  4. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065

  5. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimisation can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum
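
    The optimization loop described above can be sketched generically with simulated annealing over candidate gauge sites. In the sketch below, the objective (mean squared distance from evaluation points to the nearest gauge) is only a hedged stand-in for the mean kriging variance used in the study, and all coordinates, counts, and cooling parameters are illustrative assumptions.

    ```python
    # Simulated annealing for picking n_new gauge sites from a candidate set,
    # with a proxy objective standing in for the mean kriging variance.
    import numpy as np

    rng = np.random.default_rng(5)
    existing = rng.uniform(0, 100, size=(13, 2))          # mock existing network
    candidates = rng.uniform(0, 100, size=(200, 2))       # mock candidate sites
    grid = rng.uniform(0, 100, size=(1000, 2))            # evaluation points
    n_new = 5

    def objective(chosen_idx):
        gauges = np.vstack([existing, candidates[list(chosen_idx)]])
        d2 = ((grid[:, None, :] - gauges[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()                      # proxy for mean kriging variance

    current = set(rng.choice(len(candidates), n_new, replace=False))
    cur_obj = objective(current)
    temperature = 1.0
    for step in range(2000):
        proposal = set(current)
        proposal.remove(rng.choice(list(proposal)))       # swap one chosen site
        proposal.add(rng.choice([i for i in range(len(candidates)) if i not in proposal]))
        new_obj = objective(proposal)
        if new_obj < cur_obj or rng.random() < np.exp((cur_obj - new_obj) / temperature):
            current, cur_obj = proposal, new_obj
        temperature *= 0.995                              # geometric cooling schedule
    print(f"selected candidate indices: {sorted(current)}, objective: {cur_obj:.2f}")
    ```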

  6. A study of optimization techniques in HDR brachytherapy for the prostate

    NASA Astrophysics Data System (ADS)

    Pokharel, Ghana Shyam

    Several studies carried out thus far are in favor of dose escalation to the prostate gland to have better local control of the disease. But optimal way of delivery of higher doses of radiation therapy to the prostate without hurting neighboring critical structures is still debatable. In this study, we proposed that real time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach of delivery eliminates the critical issues such as treatment setup uncertainties and target localization as in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, the recent report of radiobiological parameters further strengthen the argument of using hypofractionated HDR brachytherapy for the management of prostate cancer. Firstly, we studied the essential features and requirements of real time HDR brachytherapy treatment planning system. Automating catheter reconstruction with fast editing tools, fast yet accurate dose engine, robust and fast optimization and evaluation engine are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took significant amount of time of overall procedure. So, making treatment plan optimization automatic or semi-automatic with sufficient speed and accuracy was the goal of the remaining part of the project. Secondly, we studied the role of optimization function and constraints in overall quality of optimized plan. We have studied the gradient based deterministic algorithm with dose volume histogram (DVH) and more conventional variance based objective functions for optimization. In this optimization strategy, the relative weight of particular objective in aggregate objective function signifies its importance with respect to other objectives

  7. Bootstrap Estimation and Testing for Variance Equality.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Algina, James

    The purpose of this study was to develop a single procedure for comparing population variances which could be used for distribution forms. Bootstrap methodology was used to estimate the variability of the sample variance statistic when the population distribution was normal, platykurtic and leptokurtic. The data for the study were generated and…

  8. 44 CFR 60.6 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and exceptions. (a... the criteria set forth in §§ 60.3, 60.4, and 60.5. The issuance of a variance is for flood plain...

  9. What Do Differences Between Multi-voxel and Univariate Analysis Mean? How Subject-, Voxel-, and Trial-level Variance Impact fMRI Analysis

    PubMed Central

    Davis, Tyler; LaRocque, Karen F.; Mumford, Jeanette; Norman, Kenneth A.; Wagner, Anthony D.; Poldrack, Russell A.

    2014-01-01

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930

  10. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound.

    PubMed

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  11. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories which may originate at the earth or a point in space and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. Davidon's Variance Method is used both in the accelerated gradient projection method and in the outer loop of a 3-impulse trajectory optimization problem.

  12. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste...

  13. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste...

  14. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  15. Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*

    PubMed Central

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M.

    2010-01-01

    In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption. PMID:20405047

  16. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
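
    Under the multivariate Gaussian error model described above, conditioning on a constraint subset reduces the covariance of the remaining curves by a Schur complement. The sketch below selects constraint curves greedily (rather than over all combinations, as in the paper) to minimize the total conditional error variance; the covariance matrix is random and purely illustrative.

    ```python
    # Greedy constraint-curve selection minimizing the residual (conditional) variance
    # of the unconstrained curves under a mock Gaussian error covariance.
    import numpy as np

    rng = np.random.default_rng(6)
    N, N_C = 12, 4
    A = rng.normal(size=(N, N))
    C = A @ A.T + 0.5 * np.eye(N)                 # mock error covariance of N curves

    def residual_variance(C, chosen):
        """Trace of the covariance of unchosen curves conditioned on chosen ones."""
        rest = [i for i in range(C.shape[0]) if i not in chosen]
        if not chosen:
            return np.trace(C[np.ix_(rest, rest)])
        Css = C[np.ix_(chosen, chosen)]
        Crs = C[np.ix_(rest, chosen)]
        cond = C[np.ix_(rest, rest)] - Crs @ np.linalg.solve(Css, Crs.T)   # Schur complement
        return np.trace(cond)

    chosen = []
    for _ in range(N_C):
        best = min((i for i in range(N) if i not in chosen),
                   key=lambda i: residual_variance(C, chosen + [i]))
        chosen.append(best)
    print("greedy constraint set:", chosen,
          " residual variance:", round(residual_variance(C, chosen), 2))
    ```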

  17. Gap-filling methods to impute eddy covariance flux data by preserving variance.

    NASA Astrophysics Data System (ADS)

    Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.

    2015-12-01

    To represent carbon dynamics, in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, measurements from EC data are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to have estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time, which mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a long-leaf pine site at the Joseph Jones Ecological Research Center. We used as our base the Michaelis-Menten and Van't Hoff functions. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides us with an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window with additional micrometeorological predictors (as indicated by lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of potential driver variables
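
    As a minimal sketch of the Michaelis-Menten-type fit that such moving-window gap-filling builds on, the code below fits a rectangular-hyperbola light-response curve to synthetic daytime NEE data with nonlinear least squares. The parameter names (alpha, Amax, Rd) follow common convention and are not the authors' exact formulation.

    ```python
    # Nonlinear least-squares fit of a Michaelis-Menten-type NEE light-response curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def nee_model(par, alpha, amax, rd):
        # daytime NEE: rectangular-hyperbola uptake plus ecosystem respiration
        return -(alpha * par * amax) / (alpha * par + amax) + rd

    rng = np.random.default_rng(7)
    par = rng.uniform(0, 2000, 300)                       # PAR (umol m-2 s-1)
    true = dict(alpha=0.05, amax=20.0, rd=3.0)
    nee = nee_model(par, **true) + rng.normal(0, 1.5, par.size)   # synthetic observations

    popt, pcov = curve_fit(nee_model, par, nee, p0=[0.03, 15.0, 2.0])
    print("alpha, Amax, Rd =", np.round(popt, 3))
    ```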

  18. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm, helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to

  19. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
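
    As one concrete instance of the generic variance-reduction devices alluded to above, the sketch below applies antithetic variates to a toy one-dimensional integrand (not to an actual corrector problem); it compares the standard error of the plain and antithetic estimators at an equal evaluation budget.

    ```python
    # Antithetic variates on a toy integrand E[f(U)], U ~ Uniform(0, 1), f(u) = exp(u).
    import numpy as np

    rng = np.random.default_rng(8)
    f = lambda u: np.exp(u)
    n = 50_000

    u = rng.uniform(0, 1, n)
    plain = f(u)                                   # n function evaluations
    u_half = u[: n // 2]
    anti = 0.5 * (f(u_half) + f(1.0 - u_half))     # also n evaluations, n/2 antithetic pairs

    print(f"plain MC   : est={plain.mean():.5f}, se={plain.std(ddof=1)/np.sqrt(n):.2e}")
    print(f"antithetic : est={anti.mean():.5f}, se={anti.std(ddof=1)/np.sqrt(n//2):.2e}")
    # exact value for reference: e - 1 = 1.71828...
    ```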

  20. Variance Function Regression in Hierarchical Age-Period-Cohort Models: Applications to the Study of Self-Reported Health

    PubMed Central

    Zheng, Hui; Yang, Yang; Land, Kenneth C.

    2012-01-01

    Two long-standing research problems of interest to sociologists are sources of variations in social inequalities and differential contributions of the temporal dimensions of age, time period, and cohort to variations in social phenomena. Recently, scholars have introduced a model called Variance Function Regression for the study of the former problem, and a model called Hierarchical Age-Period-Cohort regression has been developed for the study of the latter. This article presents an integration of these two models as a means to study the evolution of social inequalities along distinct temporal dimensions. We apply the integrated model to survey data on subjective health status. We find substantial age, period, and cohort effects, as well as gender differences, not only for the conditional mean of self-rated health (i.e., between-group disparities), but also for the variance in this mean (i.e., within-group disparities)—and it is detection of age, period, and cohort variations in the latter disparities that application of the integrated model permits. Net of effects of age and individual-level covariates, in recent decades, cohort differences in conditional means of self-rated health have been less important than period differences that cut across all cohorts. By contrast, cohort differences of variances in these conditional means have dominated period differences. In particular, post-baby boom birth cohorts show significant and increasing levels of within-group disparities. These findings illustrate how the integrated model provides a powerful framework through which to identify and study the evolution of variations in social inequalities across age, period, and cohort temporal dimensions. Accordingly, this model should be broadly applicable to the study of social inequality in many different substantive contexts. PMID:22904570

  1. Minimum Variance Distortionless Response Beamformer with Enhanced Nulling Level Control via Dynamic Mutated Artificial Immune System

    PubMed Central

    Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136

  2. Minimum variance distortionless response beamformer with enhanced nulling level control via dynamic mutated artificial immune system.

    PubMed

    Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals.
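
    The MVDR weights themselves have the closed form w = R^{-1} a / (a^H R^{-1} a), which passes the look direction undistorted while minimizing output power. The sketch below computes them for a uniform linear array with a mock interferer; it implements only the standard MVDR step, not the DM-AIS enhancement proposed above, and the geometry and angles are illustrative.

    ```python
    # Standard MVDR beamformer weights for a uniform linear array (ULA).
    import numpy as np

    def steering_vector(n_elem, angle_deg, spacing=0.5):
        # spacing in wavelengths; angle measured from broadside
        k = 2 * np.pi * spacing * np.sin(np.deg2rad(angle_deg))
        return np.exp(1j * k * np.arange(n_elem))

    n_elem, n_snap = 8, 2000
    rng = np.random.default_rng(9)
    a_sig = steering_vector(n_elem, 0.0)          # desired signal at broadside
    a_int = steering_vector(n_elem, 40.0)         # interferer at 40 degrees

    s = rng.normal(size=n_snap)                   # desired-signal waveform
    i = 3.0 * rng.normal(size=n_snap)             # strong interferer waveform
    noise = (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap))) / np.sqrt(2)
    X = np.outer(a_sig, s) + np.outer(a_int, i) + 0.1 * noise

    R = X @ X.conj().T / n_snap                   # sample covariance matrix
    w = np.linalg.solve(R, a_sig)
    w /= a_sig.conj() @ w                         # MVDR normalization: w^H a = 1

    pattern = lambda ang: np.abs(w.conj() @ steering_vector(n_elem, ang))
    print(f"gain toward signal (0 deg): {pattern(0.0):.2f}")
    print(f"gain toward interferer (40 deg): {pattern(40.0):.4f}")
    ```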

  3. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
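
    A small worked comparison makes the point: worst-case tolerance stacking adds limits directly, while statistical (root-sum-square) stacking adds variances, which is why understanding variance changes the tolerances a design actually needs. The dimensions and tolerances below are hypothetical.

    ```python
    # Worst-case versus root-sum-square (statistical) tolerance stack for a linear assembly.
    import numpy as np

    # component tolerances, interpreted as +/- limits at the same sigma level
    tolerances = np.array([0.10, 0.05, 0.08, 0.12])

    worst_case = tolerances.sum()
    rss = np.sqrt((tolerances ** 2).sum())        # valid when independent variances simply add

    print(f"worst-case stack: +/-{worst_case:.3f}")
    print(f"RSS (statistical) stack: +/-{rss:.3f}")
    ```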

  4. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…

  5. Prediction of Tibial Rotation Pathologies Using Particle Swarm Optimization and K-Means Algorithms.

    PubMed

    Sari, Murat; Tuna, Can; Akogul, Serkan

    2018-03-28

    The aim of this article is to investigate pathological subjects from a population through different physical factors. To achieve this, particle swarm optimization (PSO) and K-means (KM) clustering algorithms have been combined (PSO-KM). Datasets provided by the literature were divided into three clusters based on age and weight parameters, and each of the right tibial external rotation (RTER), right tibial internal rotation (RTIR), left tibial external rotation (LTER), and left tibial internal rotation (LTIR) values was divided into three types, Type 1, Type 2, and Type 3 (Type 2 is non-pathological (normal) and the other two types are pathological (abnormal)). The rotation values of every subject in any cluster were noted. Then the algorithm was run and the produced values were also considered. The values produced by the algorithm, the PSO-KM, have been compared with the real values. The hybrid PSO-KM algorithm has been very successful in the optimal clustering of the tibial rotation types through the physical criteria. In this investigation, Type 2 (non-pathological subjects) is of especially high predictability, and the PSO-KM algorithm has been very successful as an operation system for clustering and optimizing the tibial motion data assessments. These research findings are expected to be very useful for health providers, such as physiotherapists and orthopedists, and may help clinicians design appropriate treatment schedules for patients.

  6. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  7. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
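
    The link between the shape of a utility function and variance-risk preference can be illustrated numerically: with a convex region at low reward and a concave region at high reward, expected utility favors the high-variance gamble at low expected value and the low-variance gamble at high expected value. The s-shaped utility below is illustrative only, not the fitted monkey utilities.

    ```python
    # Expected-utility comparison of equal-EV gambles under an illustrative s-shaped utility.
    import numpy as np

    utility = lambda x: 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))   # convex below 0.5, concave above

    def expected_utility(outcomes, probs=None):
        outcomes = np.asarray(outcomes, float)
        probs = np.full(outcomes.size, 1.0 / outcomes.size) if probs is None else np.asarray(probs)
        return float(probs @ utility(outcomes))

    for ev, spread in [(0.2, 0.15), (0.8, 0.15)]:
        low_var = [ev - 0.05, ev + 0.05]          # same EV, small spread
        high_var = [ev - spread, ev + spread]     # same EV, large spread
        pref = "high-variance" if expected_utility(high_var) > expected_utility(low_var) else "low-variance"
        print(f"EV={ev:.1f}: prefers the {pref} gamble")
    ```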

  8. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.

  9. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
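    The rescaling described above is simple to compute once a relationship matrix is at hand. Below is a minimal sketch, assuming a NumPy array K holding the relationships among the reference individuals and an estimated variance component sigma2_hat; it computes Dk as the average self-relationship minus the average overall relationship and uses it to refer the variance to the observed population. The toy relationship matrix is illustrative only.

        import numpy as np

        def dk_statistic(K):
            """Average self-relationship minus average (self- and across-) relationship."""
            avg_self = np.mean(np.diag(K))
            avg_all = np.mean(K)
            return avg_self - avg_all

        def rescale_genetic_variance(sigma2_hat, K):
            """Refer an estimated genetic variance to the observed reference population."""
            return sigma2_hat * dk_statistic(K)

        # toy example with a crude marker-based relationship matrix (illustrative only)
        rng = np.random.default_rng(0)
        G = rng.normal(size=(50, 200))           # 50 individuals, 200 markers
        K = G @ G.T / G.shape[1]
        print(rescale_genetic_variance(1.0, K))  # variance referred to this population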

  10. Mean-field games for marriage.

    PubMed

    Bauso, Dario; Dia, Ben Mansour; Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2014-01-01

    This article examines mean-field games for marriage. The results support the argument that optimizing the long-term well-being through effort and social feeling state distribution (mean-field) will help to stabilize marriage. However, if the cost of effort is very high, the couple fluctuates in a bad feeling state or the marriage breaks down. We then examine the influence of society on a couple using mean-field sentimental games. We show that, in mean-field equilibrium, the optimal effort is always higher than the one-shot optimal effort. We illustrate numerically the influence of the couple's network on their feeling states and their well-being.

  11. Mean-Field Games for Marriage

    PubMed Central

    Bauso, Dario; Dia, Ben Mansour; Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2014-01-01

    This article examines mean-field games for marriage. The results support the argument that optimizing the long-term well-being through effort and social feeling state distribution (mean-field) will help to stabilize marriage. However, if the cost of effort is very high, the couple fluctuates in a bad feeling state or the marriage breaks down. We then examine the influence of society on a couple using mean-field sentimental games. We show that, in mean-field equilibrium, the optimal effort is always higher than the one-shot optimal effort. We illustrate numerically the influence of the couple’s network on their feeling states and their well-being. PMID:24804835

  12. Cosmic variance in inflation with two light scalars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie

    We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1 H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.

  13. Variance computations for functional of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  14. Variance computations for functional of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  15. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

  16. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864

  17. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.

  18. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
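    To make the comparison concrete, here is a minimal sketch of a weighted least-squares meta-regression with a Knapp-Hartung-type variance adjustment, assuming effect estimates y with within-study variances v, a moderator matrix X, and an already estimated between-study variance tau2. The data values are invented and this is not the simulation setup used in the note.

        import numpy as np

        def kh_meta_regression(y, v, X, tau2):
            """Weighted LS meta-regression with a Knapp-Hartung-type variance adjustment."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            X = np.asarray(X, float)
            w = 1.0 / (v + tau2)                   # random-effects weights
            XtW = X.T * w
            cov_naive = np.linalg.inv(XtW @ X)     # usual inverse-variance covariance
            beta = cov_naive @ (XtW @ y)
            resid = y - X @ beta
            k, p = X.shape
            q = np.sum(w * resid**2) / (k - p)     # weighted residual mean square
            return beta, q * cov_naive             # adjusted covariance of beta

        # toy example: 8 trials, intercept plus one moderator (illustrative values)
        y = [0.2, 0.5, 0.1, 0.4, 0.3, 0.6, 0.0, 0.35]
        v = [0.04, 0.05, 0.03, 0.06, 0.04, 0.07, 0.05, 0.04]
        X = np.column_stack([np.ones(8), np.arange(8)])
        print(kh_meta_regression(y, v, X, tau2=0.02))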

  19. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2013-10-01

    Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological
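    As a rough illustration of the optimization step, the sketch below uses simulated annealing to choose k new station locations from a set of candidate sites so as to minimize a spatial-variance objective. The objective used here (mean squared distance to the nearest gauge) is only a stand-in for the mean kriging variance of the robust objective function described above, and all coordinates and parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        existing = rng.uniform(0, 100, size=(13, 2))    # existing gauges (illustrative)
        candidates = rng.uniform(0, 100, size=(60, 2))  # candidate new sites
        grid = np.stack(np.meshgrid(np.linspace(0, 100, 25),
                                    np.linspace(0, 100, 25)), -1).reshape(-1, 2)

        def objective(selected_idx):
            """Proxy for mean kriging variance: mean squared distance to nearest gauge."""
            stations = np.vstack([existing, candidates[selected_idx]])
            d2 = ((grid[:, None, :] - stations[None, :, :]) ** 2).sum(-1)
            return d2.min(axis=1).mean()

        def simulated_annealing(k=5, n_iter=2000, t0=1.0, cooling=0.995):
            current = list(rng.choice(len(candidates), size=k, replace=False))
            best, best_val = current[:], objective(current)
            cur_val, temp = best_val, t0
            for _ in range(n_iter):
                prop = current[:]
                out = rng.integers(k)               # swap one selected site for a new one
                pool = [i for i in range(len(candidates)) if i not in current]
                prop[out] = int(rng.choice(pool))
                prop_val = objective(prop)
                if prop_val < cur_val or rng.random() < np.exp((cur_val - prop_val) / temp):
                    current, cur_val = prop, prop_val
                    if cur_val < best_val:
                        best, best_val = current[:], cur_val
                temp *= cooling
            return best, best_val

        print(simulated_annealing())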

  20. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    NASA Astrophysics Data System (ADS)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is shortly called the RR interval. The irregularity could be represented using the variance or spread of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and variances of RR intervals, we obtain good atrial fibrillation detection performance.
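    A minimal version of such a detector can be sketched as follows: compute the variance of the RR intervals in a sliding window and flag windows whose variance exceeds a threshold. The window length, threshold, and synthetic rhythms below are illustrative assumptions, not values from the study.

        import numpy as np

        def detect_af(rr_intervals, window=20, var_threshold=0.01):
            """Flag windows of RR intervals (in seconds) whose variance exceeds a threshold."""
            rr = np.asarray(rr_intervals, float)
            flags = []
            for start in range(0, len(rr) - window + 1):
                seg = rr[start:start + window]
                flags.append(seg.var() > var_threshold)
            return np.array(flags)

        # synthetic example: regular rhythm followed by irregular (AF-like) intervals
        rng = np.random.default_rng(0)
        regular = 0.8 + 0.02 * rng.standard_normal(100)
        irregular = 0.8 + 0.25 * rng.standard_normal(100)
        print(detect_af(np.concatenate([regular, irregular])).mean())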

  1. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater

    PubMed Central

    Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal

    2016-01-01

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016

  2. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of a space code for an intensity modulated direct detection multi-input-multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale diversity and small-scale diversity gains and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all the high dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is to allow low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such a system.

  3. Teleportation of squeezing: Optimization using non-Gaussian resources

    NASA Astrophysics Data System (ADS)

    Dell'Anno, Fabio; de Siena, Silvio; Adesso, Gerardo; Illuminati, Fabrizio

    2010-12-01

    We study the continuous-variable quantum teleportation of states, statistical moments of observables, and scale parameters such as squeezing. We investigate the problem both in ideal and imperfect Vaidman-Braunstein-Kimble protocol setups. We show how the teleportation fidelity is maximized and the difference between output and input variances is minimized by using suitably optimized entangled resources. Specifically, we consider the teleportation of coherent squeezed states, exploiting squeezed Bell states as entangled resources. This class of non-Gaussian states, introduced by Illuminati and co-workers [F. Dell'Anno, S. De Siena, L. Albano, and F. Illuminati, Phys. Rev. A 76, 022301 (2007); F. Dell'Anno, S. De Siena, and F. Illuminati, Phys. Rev. A 81, 012333 (2010)], includes photon-added and photon-subtracted squeezed states as special cases. At variance with the case of entangled Gaussian resources, the use of entangled non-Gaussian squeezed Bell resources allows one to choose different optimization procedures that lead to inequivalent results. Performing two independent optimization procedures, one can either maximize the state teleportation fidelity, or minimize the difference between input and output quadrature variances. The two different procedures are compared depending on the degrees of displacement and squeezing of the input states and on the working conditions in ideal and nonideal setups.

  4. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing Delta method and Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
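    A numerical counterpart of this idea, under the simplifying assumption of shot-noise-limited (Poisson) intensity measurements, is sketched below: the Delta-method variance of the DOLP estimator P = (I1 - I2)/(I1 + I2), with intensities estimated as counts divided by integration time, is evaluated over a grid of time splits and the minimizing split is returned. The paper derives a closed-form optimum; the grid search here is only an illustration, and the photon rates are invented.

        import numpy as np

        def dolp_variance(t1, t2, r1, r2):
            """Delta-method variance of P=(r1-r2)/(r1+r2), with rates estimated as
            counts/time and counts assumed Poisson, so Var(r_hat_i) = r_i / t_i."""
            s = r1 + r2
            dP_dr1 = 2.0 * r2 / s**2
            dP_dr2 = -2.0 * r1 / s**2
            return dP_dr1**2 * (r1 / t1) + dP_dr2**2 * (r2 / t2)

        def best_split(total_time, r1, r2, n_grid=999):
            fracs = np.linspace(0.001, 0.999, n_grid)
            variances = [dolp_variance(f * total_time, (1 - f) * total_time, r1, r2)
                         for f in fracs]
            i = int(np.argmin(variances))
            return fracs[i], variances[i]

        # illustrative photon rates for the two orthogonal polarization channels
        print(best_split(total_time=1.0, r1=8000.0, r2=2000.0))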

  5. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  6. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  7. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time...

  8. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
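    Both variances mentioned above are easy to compute from averaged fractional-frequency data. The sketch below implements the standard non-overlapping Allan variance and the Hadamard variance at a single averaging factor, and the synthetic example adds a linear drift to show that drift inflates the former but not the latter; this is a generic illustration, not the MCS processing chain.

        import numpy as np

        def allan_variance(y, m=1):
            """Two-sample (Allan) variance of fractional-frequency data y at averaging factor m."""
            y = np.asarray(y, float)
            yb = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)  # averages over tau = m*tau0
            d = np.diff(yb)
            return 0.5 * np.mean(d**2)

        def hadamard_variance(y, m=1):
            """Three-sample (Hadamard) variance; insensitive to linear frequency drift."""
            y = np.asarray(y, float)
            yb = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
            d2 = yb[2:] - 2.0 * yb[1:-1] + yb[:-2]                 # second differences
            return np.mean(d2**2) / 6.0

        # white-FM noise plus a linear drift: drift inflates the Allan but not the Hadamard value
        rng = np.random.default_rng(0)
        y = 1e-12 * rng.standard_normal(10000) + 1e-15 * np.arange(10000)
        print(allan_variance(y, m=100), hadamard_variance(y, m=100))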

  9. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this

  10. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  11. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of the past data.

  12. Sampling Variances and Covariances of Parameter Estimates in Item Response Theory.

    DTIC Science & Technology

    1982-08-01

    substituting (15) into (16) and solving for k and K, where the b terms are means for the m and r items, respectively. To find the variance ... C5, and C12 were treated as known. We find that the standard errors of B1 to B5 are increased drastically by ignorance of C1 to C5; all ...

  13. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
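    The variance-reduction idea, estimating the cheap low-fidelity output with many samples and correcting it with a few coupled high-fidelity samples, can be illustrated with a generic two-level Monte Carlo sketch. The "fine" and "coarse" functions below are arbitrary stand-ins for the HDG and reduced-basis approximations of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def fine_model(z):    # stand-in for the high-fidelity (HDG) output
            return np.sin(z) + 0.05 * z**2

        def coarse_model(z):  # stand-in for the cheap reduced-basis output
            return np.sin(z)

        def two_level_estimator(n_coarse=100000, n_fine=500):
            """E[fine] = E[coarse] + E[fine - coarse], each term sampled independently."""
            z0 = rng.standard_normal(n_coarse)
            z1 = rng.standard_normal(n_fine)
            return coarse_model(z0).mean() + (fine_model(z1) - coarse_model(z1)).mean()

        def plain_mc(n=500):
            z = rng.standard_normal(n)
            return fine_model(z).mean()

        print(two_level_estimator(), plain_mc())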

  14. Design of off-statistics axial-flow fans by means of vortex law optimization

    NASA Astrophysics Data System (ADS)

    Lazari, Andrea; Cattanei, Andrea

    2014-12-01

    Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper, a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified and their suitable ranges of variation are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, some correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.

  15. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  16. 10 CFR 851.32 - Action on variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... cited by the Chief Health, Safety and Security Officer; or (ii) Forward to the Under Secretary the... DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a) Procedures for an approval recommendation. (1) If the Chief Health, Safety and Security Officer recommends...

  17. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    PubMed

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf-end-induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray-tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out for approximating the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while using a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of the multileaf collimator.

  18. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    PubMed Central

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf-end-induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray-tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out for approximating the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while using a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of the multileaf collimator. PMID:27110274

  19. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
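    For background, the classic two-level version of this allocation problem (assumed here; the article itself treats three and four levels) gives the variance-minimizing number of subjects per cluster for a fixed budget as n = sqrt((c_cluster / c_subject) * (1 - rho) / rho), where rho is the intraclass correlation and the c's are unit costs. A small sketch with invented costs:

        import math

        def optimal_allocation(total_budget, cost_cluster, cost_subject, rho):
            """Two-level optimal allocation: subjects per cluster minimizing the variance
            of the treatment-effect estimate for a fixed total budget."""
            n = math.sqrt((cost_cluster / cost_subject) * (1.0 - rho) / rho)
            n = max(1, round(n))
            clusters = int(total_budget // (cost_cluster + n * cost_subject))
            return clusters, n

        # example: cluster costs 400, subject costs 20, ICC = 0.05, budget 20,000 (illustrative)
        print(optimal_allocation(20000, 400, 20, 0.05))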

  20. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.

  1. Jackknife for Variance Analysis of Multifactor Experiments.

    DTIC Science & Technology

    1982-05-01

    variance-covariance matrix is generated by a subroutine named CORAN (UNIVAC, 1969). The jackknife variances are then punched on computer cards in the same ...

  2. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K2. Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.

  3. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
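    One widely used first-order approximation, stated here as background rather than as the improved relations derived in the paper, takes the lateral velocity standard deviation as roughly the mean speed times the standard deviation of wind direction in radians, and distinguishes the scalar mean speed from the vector mean speed. The sketch below computes both means and the approximate lateral and longitudinal variances from speed and direction samples; the Yamartino-style direction statistic and the synthetic data are assumptions.

        import numpy as np

        def wind_stats(speed, direction_deg):
            """Scalar/vector mean winds and first-order lateral and longitudinal variances
            from cup-anemometer speed and wind-vane direction samples."""
            speed = np.asarray(speed, float)
            theta = np.deg2rad(np.asarray(direction_deg, float))
            u, v = speed * np.cos(theta), speed * np.sin(theta)
            scalar_mean = speed.mean()
            vector_mean = np.hypot(u.mean(), v.mean())
            # direction spread via the unit-vector (Yamartino-style) estimator
            sa, ca = np.sin(theta).mean(), np.cos(theta).mean()
            eps = np.sqrt(max(0.0, 1.0 - (sa**2 + ca**2)))
            sigma_theta = np.arcsin(eps) * (1.0 + 0.1547 * eps**3)   # radians
            sigma_v2 = (scalar_mean * sigma_theta) ** 2              # lateral variance (approx.)
            sigma_u2 = speed.var()                                   # longitudinal variance (approx.)
            return scalar_mean, vector_mean, sigma_u2, sigma_v2

        rng = np.random.default_rng(0)
        spd = np.abs(1.5 + 0.5 * rng.standard_normal(1000))
        dirc = 180 + 40 * rng.standard_normal(1000)
        print(wind_stats(spd, dirc))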

  4. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
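    The statistically optimal benchmark against which the data are tested is inverse-variance weighting: each cue is weighted in proportion to its reliability (one over its variance), and the combined estimate has variance equal to the reciprocal of the summed reliabilities. A minimal sketch with invented visual and haptic slant estimates:

        def combine_cues(estimates, variances):
            """Minimum-variance unbiased linear combination of independent cue estimates."""
            reliabilities = [1.0 / v for v in variances]
            total = sum(reliabilities)
            weights = [r / total for r in reliabilities]
            combined = sum(w * e for w, e in zip(weights, estimates))
            combined_var = 1.0 / total
            return combined, combined_var, weights

        # visual slant estimate 30 deg (variance 4), haptic slant estimate 36 deg (variance 9)
        print(combine_cues([30.0, 36.0], [4.0, 9.0]))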

  5. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer should be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied on several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
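    For reference, the closed-form minimum variance (Capon) solution that the iterative scheme approximates is w = R⁻¹a / (aᴴR⁻¹a), with R the spatial covariance of the received channel data and a the steering vector. The sketch below computes these weights directly for one simulated pre-steered data set with diagonal loading; it is a generic narrowband illustration, not the authors' low-complexity iteration.

        import numpy as np

        rng = np.random.default_rng(0)
        L, n_snap = 16, 200
        a = np.ones(L) / np.sqrt(L)                       # steering vector (pre-steered data)
        interferer = np.exp(1j * np.pi * np.arange(L) * 0.5)

        signal = a[:, None] * rng.standard_normal(n_snap)
        interference = interferer[:, None] * rng.standard_normal(n_snap)
        noise = 0.1 * (rng.standard_normal((L, n_snap)) + 1j * rng.standard_normal((L, n_snap)))
        X = signal + interference + noise

        R = X @ X.conj().T / n_snap                       # sample spatial covariance

        def mv_weights(R, a, loading=1e-2):
            """Minimum variance (Capon) weights w = R^-1 a / (a^H R^-1 a), with diagonal loading."""
            Rl = R + loading * np.trace(R).real / len(a) * np.eye(len(a))
            Ri_a = np.linalg.solve(Rl, a)
            return Ri_a / (a.conj() @ Ri_a)

        w = mv_weights(R, a)
        output = w.conj() @ X                             # beamformed output samples
        print(abs(w.conj() @ a), output.var())            # distortionless response toward a (~1)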

  6. Model-based variance-stabilizing transformation for Illumina microarray data.

    PubMed

    Lin, Simon M; Du, Pan; Huber, Wolfgang; Kibbe, Warren A

    2008-02-01

    Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and Variance-stabilizing normalization (VSN) by using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.

  7. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. © 2015 The Author(s).

  8. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    NASA Technical Reports Server (NTRS)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.

  9. Business owners' optimism and business performance after a natural disaster.

    PubMed

    Bronson, James W; Faircloth, James B; Valentine, Sean R

    2006-12-01

    Previous work indicates that individuals' optimism is related to superior performance in adverse situations. This study examined correlations between business owners' optimism scores and measures of business recovery after flooding, but found only weak support (very small common variance) for a relationship between optimism and sales recovery. Using traditional measures of recovery, this study found little empirical evidence that optimism would be of value in identifying businesses at risk after a natural disaster.

  10. Estimation of (co)variances for genomic regions of flexible sizes: application to complex infectious udder diseases in dairy cattle

    PubMed Central

    2012-01-01

    Background: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related traits such as mammary disease traits in dairy cattle. Methods: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level, per chromosome, and in regions of 100 SNP on a chromosome. Results: Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations and they were highest between general mastitis and pathogen-specific traits because of the part-whole relationship between these traits. The chromosome-wise genomic proportions of the total variance differed between traits, with some chromosomes explaining higher or lower values than expected in relation to chromosome size. Few chromosomes showed pleiotropic effects and only chromosome 19 had a clear effect on all traits, indicating the presence of QTL with a general effect on mastitis resistance. The region-wise patterns of genomic variances differed between traits. Peaks indicating QTL were identified but were not very distinctive because a common prior for the marker effects was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. Conclusions:

  11. Optimism, well-being, depressive symptoms, and perceived physical health: a study among Stroke survivors.

    PubMed

    Shifren, Kim; Anzaldi, Kristen

    2018-01-01

    The investigation of the relation of positive personality characteristics to mental and physical health among Stroke survivors has been a neglected area of research. The purpose of this study was to examine the relationship between optimism, well-being, depressive symptoms, and perceived physical health among Stroke survivors. It was hypothesized that Stroke survivors' optimism would explain variance in their physical health above and beyond the variance explained by demographic variables, diagnostic variables, and mental health. One hundred seventy-six Stroke survivors (97 females, 79 males) completed the Revised Life Orientation Test, the Center for Epidemiological Studies Depression Scale, two items on perceived physical health from the 36-item Short Form of the Medical Outcomes study, and the Identity scale of the Illness Perception Questionnaire. Pearson correlations, hierarchical regression analyses, and the PROCESS approach to determining mediators were used to assess hypothesized relations between variables. Stroke survivors' level of optimism explained additional variance in overall health in regression models controlling for demographic and diagnostic variables, and mental health. Analyses revealed that optimism played a partial mediator role between mental health (well-being, depressive symptoms and total score on CES-D) variables and overall health.

  12. A proxy for variance in dense matching over homogeneous terrain

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    variance in intensity, the topography was reconstructed entirely. This indicates that, to a large extent, interpolation was applied. To assess this amount of interpolation, processing is done with imagery which is gradually downgraded. Linking these products with the variance indicator (SNR) yields a quantitative relation between the influence of interpolation on the topography estimate and image contrast. Our proposed method is capable of providing a clear indication of variance in reconstructions from UAV photogrammetry. This indicator has a practical advantage, as it can be computed before the computationally intensive matching phase. As such, an acquired dataset can be tested in the field. If an area with too little contrast is identified, camera settings can be adjusted for a new flight, or additional measurements can be done through traditional means.

  13. Control Variates and Optimal Designs in Metamodeling

    DTIC Science & Technology

    2013-03-01

    Selection of Control Variates for Inclusion in Model ... meet the normality assumption (Nelson 1990, Nelson and Yang 1992, Anonuevo and Nelson 1988). Jackknifing, splitting, and bootstrapping can be used to ... degrees of freedom to estimate the variance are lost due to being used for the control variate inclusion. This means the variance reduction achieved must now be ...

  14. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  15. Small Drinking Water System Variances

    EPA Pesticide Factsheets

    Small system variances allow a small system to install and maintain technology that can remove a contaminant to the maximum extent that is affordable and protective of public health in lieu of technology that can achieve compliance with the regulation.

  16. Correcting for Blood Arrival Time in Global Mean Regression Enhances Functional Connectivity Analysis of Resting State fMRI-BOLD Signals.

    PubMed

    Erdoğan, Sinem B; Tong, Yunjie; Hocke, Lia M; Lindsey, Kimberly P; deB Frederick, Blaise

    2016-01-01

    Resting state functional connectivity analysis is a widely used method for mapping intrinsic functional organization of the brain. Global signal regression (GSR) is commonly employed for removing systemic global variance from resting state BOLD-fMRI data; however, recent studies have demonstrated that GSR may introduce spurious negative correlations within and between functional networks, calling into question the meaning of anticorrelations reported between some networks. In the present study, we propose that global signal from resting state fMRI is composed primarily of systemic low frequency oscillations (sLFOs) that propagate with cerebral blood circulation throughout the brain. We introduce a novel systemic noise removal strategy for resting state fMRI data, "dynamic global signal regression" (dGSR), which applies a voxel-specific optimal time delay to the global signal prior to regression from voxel-wise time series. We test our hypothesis on two functional systems that are suggested to be intrinsically organized into anticorrelated networks: the default mode network (DMN) and task positive network (TPN). We evaluate the efficacy of dGSR and compare its performance with the conventional "static" global regression (sGSR) method in terms of (i) explaining systemic variance in the data and (ii) enhancing specificity and sensitivity of functional connectivity measures. dGSR increases the amount of BOLD signal variance being modeled and removed relative to sGSR while reducing spurious negative correlations introduced in reference regions by sGSR, and attenuating inflated positive connectivity measures. We conclude that incorporating time delay information for sLFOs into global noise removal strategies is of crucial importance for optimal noise removal from resting state functional connectivity maps.
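    The core of the proposed strategy, shifting the global signal by a voxel-specific delay before regressing it out, can be sketched generically: for each voxel, find the lag (within a bounded range) that maximizes the correlation with the global signal, shift the global signal by that lag, and remove it by ordinary least squares. The code below is a schematic re-implementation of that description, not the authors' pipeline; the circular shift and the synthetic data are simplifying assumptions.

        import numpy as np

        def dynamic_gsr(voxel_ts, global_ts, max_lag=8):
            """Regress a voxel-specifically delayed global signal out of each voxel time series.
            voxel_ts: (n_voxels, n_timepoints); global_ts: (n_timepoints,)."""
            voxel_ts = np.asarray(voxel_ts, float)
            g = np.asarray(global_ts, float)
            g = (g - g.mean()) / g.std()
            cleaned = np.empty_like(voxel_ts)
            for i, v in enumerate(voxel_ts):
                # lag (in samples) maximizing correlation; circular shift used for simplicity
                lags = list(range(-max_lag, max_lag + 1))
                corrs = [np.corrcoef(v, np.roll(g, lag))[0, 1] for lag in lags]
                best = lags[int(np.argmax(np.abs(corrs)))]
                g_shift = np.roll(g, best)
                X = np.column_stack([np.ones_like(g_shift), g_shift])
                beta, *_ = np.linalg.lstsq(X, v, rcond=None)
                cleaned[i] = v - X @ beta + v.mean()    # remove delayed global signal, keep mean
            return cleaned

        # synthetic demo: two voxels sharing a delayed systemic oscillation plus noise
        rng = np.random.default_rng(0)
        t = np.arange(300)
        slfo = np.sin(2 * np.pi * 0.02 * t)
        vox = np.vstack([np.roll(slfo, 3), np.roll(slfo, 6)]) + 0.3 * rng.standard_normal((2, 300))
        print(dynamic_gsr(vox, slfo).std(axis=1))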

  17. Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.

    PubMed

    Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F

    2015-11-20

    In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences. Copyright © 2015 John Wiley & Sons, Ltd.
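
    For orientation, here is a minimal two-stage Q-learning backward induction with linear working models. It is the standard (unmodified) procedure with hypothetical variable names, not the authors' bias-reducing modification or their variance formulas.

        import numpy as np

        def two_stage_q_learning(X1, A1, Y1, X2, A2, Y2):
            """Backward induction with linear working models.
            Stage-2 Q-model: Y2 ~ 1 + X2 + A2 + A2*X2; the optimised stage-2 value is added
            to Y1 to form the stage-1 pseudo-outcome before fitting the stage-1 Q-model."""
            def fit(design, y):
                beta, *_ = np.linalg.lstsq(design, y, rcond=None)
                return beta

            # stage 2
            D2 = np.column_stack([np.ones_like(X2), X2, A2, A2 * X2])
            b2 = fit(D2, Y2)
            q2 = lambda x, a: b2[0] + b2[1] * x + b2[2] * a + b2[3] * a * x
            v2 = np.maximum(q2(X2, 0), q2(X2, 1))      # value of the best stage-2 action per subject

            # stage 1 uses the accumulated pseudo-outcome
            D1 = np.column_stack([np.ones_like(X1), X1, A1, A1 * X1])
            b1 = fit(D1, Y1 + v2)
            rule1 = lambda x: (b1[2] + b1[3] * x > 0).astype(int)
            rule2 = lambda x: (b2[2] + b2[3] * x > 0).astype(int)
            return rule1, rule2

        # toy usage on simulated data where the optimal stage-1 action depends on the sign of X1
        rng = np.random.default_rng(1)
        n = 500
        X1, A1 = rng.standard_normal(n), rng.integers(0, 2, n)
        Y1 = X1 + A1 * (X1 > 0) + 0.1 * rng.standard_normal(n)
        X2, A2 = X1 + 0.5 * rng.standard_normal(n), rng.integers(0, 2, n)
        Y2 = A2 * (1.0 - X2) + 0.1 * rng.standard_normal(n)
        rule1, rule2 = two_stage_q_learning(X1, A1, Y1, X2, A2, Y2)
        print(rule1(np.array([-1.0, 1.0])), rule2(np.array([-1.0, 2.0])))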

  18. Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging

    NASA Astrophysics Data System (ADS)

    Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping

    2011-03-01

    In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler, and Doppler variance images were compared. Blood vessels down to the capillary level could be resolved with the optimized optical coherence color Doppler and Doppler variance method. For in-vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before using the color Doppler method. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple, and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm was based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm².

  19. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.

  20. The Evolution and Consequences of Sex-Specific Reproductive Variance

    PubMed Central

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130

  1. Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.

    PubMed

    Martin, Tara Laine; Huey, Raymond B

    2008-03-01

    Body temperature (T(b)) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to control T(b) within narrow levels. These temperatures are assumed to be optimal and therefore to match body temperatures (Trmax) that maximize fitness (r). We develop an optimality model and find that optimal body temperature (T(o)) should not be centered at Trmax but shifted to a lower temperature. This finding seems paradoxical but results from two considerations relating to Jensen's inequality, which deals with how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of T(b). Second, temperature-fitness curves are asymmetric, such that a T(b) higher than Trmax depresses fitness more than will a T(b) displaced an equivalent amount below Trmax. Our model makes several predictions. The magnitude of the optimal shift (Trmax - To) should increase with the degree of asymmetry of temperature-fitness curves and with T(b) variance. Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with Trmax ("hotter is better"). Asymmetric (left-skewed) T(b) distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
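
    The argument can be reproduced numerically in a few lines: with an asymmetric temperature-fitness curve and imperfect thermoregulation, the preferred temperature that maximises expected fitness falls below Trmax. The curve shape, spread, and all parameter values below are arbitrary choices for illustration only.

        import numpy as np

        def fitness(T, Trmax=35.0):
            """Asymmetric temperature-fitness curve: a gentle rise below Trmax, a steep drop above it."""
            return np.where(T <= Trmax,
                            np.exp(-0.5 * ((T - Trmax) / 6.0) ** 2),
                            np.exp(-0.5 * ((T - Trmax) / 1.5) ** 2))

        def expected_fitness(T_pref, sd=2.0, n=200_000, seed=0):
            """Imperfect thermoregulation: body temperatures scatter around the preferred value."""
            Tb = np.random.default_rng(seed).normal(T_pref, sd, n)
            return fitness(Tb).mean()

        prefs = np.linspace(25.0, 38.0, 131)
        E = np.array([expected_fitness(p) for p in prefs])
        print("Trmax = 35.0, preferred Tb maximising expected fitness =", round(float(prefs[E.argmax()]), 1))
        # the optimum lands below Trmax, the shift predicted by Jensen's inequality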

  2. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra

    NASA Astrophysics Data System (ADS)

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between mean edited (ON ‾) and non-edited (OFF ‾) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes simultaneously relative frequencies and phases between the mean ON ‾ and OFF ‾ 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, as e.g. arising from subtraction artefacts, and outperformed the alternative spectral registration approach, that, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF ‾ and ON ‾ spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data applying both, method (b) or method (c), showed distinct improvements of spectra quality as revealed by the mean Pearson correlation coefficient between corresponding real part mean DIFF ‾ spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs

  3. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra.

    PubMed

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between mean edited (ON‾) and non-edited (OFF‾) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes simultaneously relative frequencies and phases between the mean ON‾ and OFF‾ 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, as e.g. arising from subtraction artefacts, and outperformed the alternative spectral registration approach, that, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF‾ and ON‾ spectra were compared by applying (a) no correction, (b) difference optimization or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data applying both, method (b) or method (c), showed distinct improvements of spectra quality as revealed by the mean Pearson correlation coefficient between corresponding real part mean DIFF‾ spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d
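
    The two listings above describe the same method. A rough sketch of the central step, searching for the frequency shift and zero-order phase that minimise the L1 norm of the ON-OFF difference spectrum, is given below; the FID parameterisation, the Nelder-Mead search, and the toy signal are my own simplifications, not the published implementation.

        import numpy as np
        from scipy.optimize import minimize

        def align_on_to_off(fid_on, fid_off, dt):
            """Find the frequency shift (Hz) and zero-order phase (rad) applied to the mean ON FID
            that minimise the L1 norm of the real part of the ON-OFF difference spectrum."""
            t = np.arange(len(fid_on)) * dt
            spec_off = np.fft.fft(fid_off)

            def cost(params):
                df, phi = params
                shifted = fid_on * np.exp(1j * (2.0 * np.pi * df * t + phi))
                return np.abs(np.real(np.fft.fft(shifted) - spec_off)).sum()

            return minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead").x

        # toy check: the ON FID carries a deliberate -3 Hz / -0.2 rad error relative to OFF
        rng = np.random.default_rng(2)
        dt, n = 1e-3, 2048
        t = np.arange(n) * dt
        fid = np.exp(2j * np.pi * 100.0 * t - t / 0.1)
        fid_off = fid + 0.01 * rng.standard_normal(n)
        fid_on = fid * np.exp(-1j * (2.0 * np.pi * 3.0 * t + 0.2)) + 0.01 * rng.standard_normal(n)
        print(np.round(align_on_to_off(fid_on, fid_off, dt), 3))   # should recover roughly [3.0, 0.2]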

  4. The Genealogical Consequences of Fecundity Variance Polymorphism

    PubMed Central

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  5. Recombination and genetic variance among maize doubled haploids induced from F1 and F2 plants.

    PubMed

    Sleper, Joshua A; Bernardo, Rex

    2016-12-01

    Inducing maize doubled haploids from F2 plants (DHF2) instead of F1 plants (DHF1) led to more recombination events. However, the best DHF2 lines did not outperform the best DHF1 lines. Maize (Zea mays L.) breeders rely on doubled haploid (DH) technology for fast and efficient production of inbreds. Breeders can induce DH lines most quickly from F1 plants (DHF1), or induce DH lines from F2 plants (DHF2) to allow selection prior to DH induction and to obtain more recombinations. Our objective was to determine if the additional recombinations in maize DHF2 lines lead to a larger genetic variance and a superior mean of the best lines. A total of 311 DHF1 and 241 DHF2 lines, derived from the same biparental cross, were crossed to two testers and evaluated in multilocation trials in Europe and the US. The mean number of recombinations per genome was 14.48 among the DHF1 lines and 21.38 among the DHF2 lines. The means of the DHF1 and DHF2 lines did not differ for yield, moisture, and plant height. The genetic variance was higher among DHF2 lines than among DHF1 lines for moisture, but not for yield and plant height. The ratio of repulsion to coupling linkages, which was estimated from genomewide marker effects, was higher among DHF1 lines than among DHF2 lines for moisture, but not for yield and plant height. The higher genetic variance for moisture among DHF2 lines did not lead to lower moisture of the best 10% of the lines. Our results indicated that the decision to induce DH lines from F1 or F2 plants needs to be made from considerations other than the performance of the resulting DHF1 or DHF2 lines.

  6. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range.

    PubMed

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-06-01

    The era of big data is coming, and evidence-based medicine is attracting increasing attention to improve decision making in medical practice via integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimation of a treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis but some trials use the median, the minimum and maximum values, or sometimes the first and third quartiles to report the results. Thus, to pool results in a consistent format, researchers need to transform those information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, needless to say its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of the simplicity. The real data application indicates that our proposed estimators are capable to serve as "rules of thumb" and will be widely applied in evidence-based medicine.
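
    A sketch of the min/median/max scenario is shown below. The smoothly changing weight 4 / (4 + n**0.75) follows my reading of the Luo et al. (2018) proposal and should be verified against the paper before use; the simulation check is only illustrative.

        import numpy as np

        def mean_from_min_med_max(a, m, b, n, weight=None):
            """Estimate the sample mean from (min, median, max, n) by blending the mid-range with
            the median using a sample-size-dependent weight. The default weight 4 / (4 + n**0.75)
            is an assumed form; check it against the published estimator."""
            w = 4.0 / (4.0 + n ** 0.75) if weight is None else weight
            return w * (a + b) / 2.0 + (1.0 - w) * m

        # quick sanity check on simulated normal data
        rng = np.random.default_rng(3)
        x = rng.normal(10.0, 2.0, 250)
        est = mean_from_min_med_max(x.min(), float(np.median(x)), x.max(), x.size)
        print(round(float(x.mean()), 3), round(float(est), 3))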

  7. Design of robust systems by means of the numerical optimization with harmonic changing of the model parameters

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Reva, I. L.; Dimitrov, L. V.

    2017-01-01

    The design of robust feedback systems by means of numerical optimization is usually accomplished by modeling several systems simultaneously. In each such system the regulators are identical, but the object models differ, covering all edge values of the possible object model parameters. Even so, not all possible sets of model parameters are taken into account, so the regulator may not be robust, i.e., it may fail to provide system stability in cases that were not tested during the optimization procedure. The paper proposes an alternative method: all parameters are varied sequentially according to a harmonic law, with the frequencies of variation of the different parameters chosen to be aliquant (incommensurate). This provides full coverage of the parameter space.
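
    A toy version of the idea, under my own assumptions (a first-order plant, a PI regulator, and an integrated-squared-error cost), is sketched below: the two plant parameters sweep their uncertainty ranges sinusoidally at incommensurate frequencies while the gains are tuned numerically.

        import numpy as np
        from scipy.optimize import minimize

        def closed_loop_cost(gains, t_end=30.0, dt=0.01):
            """Integrated squared tracking error of a PI loop on the plant dy/dt = -a(t)*y + b(t)*u,
            while a(t) and b(t) sweep their ranges sinusoidally at incommensurate ('aliquant')
            frequencies, so the whole parameter box is visited during one run."""
            kp, ki = gains
            y, integ, cost = 0.0, 0.0, 0.0
            for k in range(int(t_end / dt)):
                t = k * dt
                a = 1.0 + 0.5 * np.sin(np.sqrt(2.0) * t)      # a(t) in [0.5, 1.5]
                b = 1.0 + 0.3 * np.sin(np.sqrt(5.0) * t)      # b(t) in [0.7, 1.3]
                e = 1.0 - y                                   # unit step reference
                integ += e * dt
                u = kp * e + ki * integ
                y += dt * (-a * y + b * u)                    # explicit Euler step
                if not np.isfinite(y) or abs(y) > 1e6:        # penalise unstable gain sets
                    return 1e9
                cost += e * e * dt
            return cost

        res = minimize(closed_loop_cost, x0=[1.0, 1.0], method="Nelder-Mead")
        print(np.round(res.x, 3), round(res.fun, 4))          # tuned (kp, ki) and the achieved cost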

  8. Nonnormality increases variance of gravity waves trapped in a tilted box

    NASA Astrophysics Data System (ADS)

    Harlander, Uwe; Borcia, Ion Dan; Krebs, Andreas

    2017-04-01

    We study the prototype problem of internal gravity waves in a square domain tilted with respect to the gravity vector by an angle theta. Only when theta is zero do regular normal modes exist; for all other angles wave attractors and singularities dominate the flow. We show that the linear operator of the governing PDE becomes non-normal for nonzero theta, giving rise to non-modal transient growth. This growth depends on the underlying norm: for the variance norm significant growth rates can be found, whereas for the energy norm no growth is possible since there is no source of energy (in contrast to shear flows, for which the mean flow feeds the perturbations). We continue by showing that the nonnormality of the system matrix increases with theta and reaches a maximum when theta is 45 degrees. Moreover, the growth rate increases, as can be expected from the increasing nonnormality of the matrix. Our results imply that at least the simplest wave attractors can be seen as those initial flow fields that gain the most variance during a given time period.
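
    Norm-dependent transient growth of a non-normal operator can be illustrated with a two-dimensional toy system: two stable matrices with identical eigenvalues, one normal and one non-normal, give very different maximal amplification. The matrices below are arbitrary examples, not the operator of the tilted-box problem.

        import numpy as np
        from scipy.linalg import expm

        def max_transient_growth(A, times):
            """Largest possible amplification max_t max_x ||exp(tA) x|| / ||x||, i.e. the maximum
            over time of the leading singular value of the propagator exp(tA)."""
            return max(np.linalg.svd(expm(t * A), compute_uv=False)[0] for t in times)

        times = np.linspace(0.0, 10.0, 201)
        A_normal = np.array([[-0.1, 0.0], [0.0, -0.2]])       # normal and stable: no transient growth
        A_nonnormal = np.array([[-0.1, 5.0], [0.0, -0.2]])    # same eigenvalues, strongly non-normal
        print(round(max_transient_growth(A_normal, times), 2),
              round(max_transient_growth(A_nonnormal, times), 2))   # the second value is well above 1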

  9. Thermodynamic characterization of synchronization-optimized oscillator networks

    NASA Astrophysics Data System (ADS)

    Yanagita, Tatsuo; Ichinomiya, Takashi

    2014-12-01

    We consider a canonical ensemble of synchronization-optimized networks of identical oscillators under external noise. By performing a Markov chain Monte Carlo simulation using the Kirchhoff index, i.e., the sum of the inverse eigenvalues of the Laplacian matrix (as a graph Hamiltonian of the network), we construct more than 1 000 different synchronization-optimized networks. We then show that the transition from star to core-periphery structure depends on the connectivity of the network, and is characterized by the node degree variance of the synchronization-optimized ensemble. We find that thermodynamic properties such as heat capacity show anomalies for sparse networks.
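
    A small-scale sketch of such a canonical-ensemble construction, written with networkx, is given below: Metropolis moves rewire one edge at a time and are accepted with probability exp(-beta * ΔH), where H is the sum of inverse nonzero Laplacian eigenvalues. The graph size, edge count, inverse temperature, and step count are placeholder values, not those of the study.

        import numpy as np
        import networkx as nx

        def kirchhoff_index(G):
            """Graph 'Hamiltonian' used above: sum of inverse nonzero Laplacian eigenvalues."""
            lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
            return float(np.sum(1.0 / lam[1:]))                # skip the single zero eigenvalue

        def sample_optimized_network(n=20, m=40, beta=50.0, steps=2000, seed=0):
            """Metropolis sampling over connected graphs with a fixed number of edges,
            biased toward low Kirchhoff index (one member of the optimized ensemble)."""
            rng = np.random.default_rng(seed)
            G = nx.gnm_random_graph(n, m, seed=seed)
            while not nx.is_connected(G):
                G = nx.gnm_random_graph(n, m, seed=int(rng.integers(10**9)))
            H = kirchhoff_index(G)
            for _ in range(steps):
                G2 = G.copy()
                u, v = list(G2.edges())[rng.integers(G2.number_of_edges())]
                G2.remove_edge(u, v)                           # rewire one edge
                a, b = rng.integers(n, size=2)
                if a == b or G2.has_edge(a, b):
                    continue
                G2.add_edge(a, b)
                if not nx.is_connected(G2):
                    continue
                H2 = kirchhoff_index(G2)
                if H2 <= H or rng.random() < np.exp(-beta * (H2 - H)):   # Metropolis acceptance
                    G, H = G2, H2
            return G, H

        G, H = sample_optimized_network()
        degrees = [d for _, d in G.degree()]
        print(round(H, 3), round(float(np.var(degrees)), 3))   # Hamiltonian and node-degree variance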

  10. Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity.

    PubMed

    Costa, Rui Ponte; Padamsey, Zahid; D'Amour, James A; Emptage, Nigel J; Froemke, Robert C; Vogels, Tim P

    2017-09-27

    Long-term modifications of neuronal connections are critical for reliable memory storage in the brain. However, their locus of expression-pre- or postsynaptic-is highly variable. Here we introduce a theoretical framework in which long-term plasticity performs an optimization of the postsynaptic response statistics toward a given mean with minimal variance. Consequently, the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic modifications. Our theory explains the experimentally observed expression loci of the hippocampal and neocortical synaptic potentiation studies we examined. Moreover, the theory predicts presynaptic expression of long-term depression, consistent with experimental observations. At inhibitory synapses, the theory suggests a statistically efficient excitatory-inhibitory balance in which changes in inhibitory postsynaptic response statistics specifically target the mean excitation. Our results provide a unifying theory for understanding the expression mechanisms and functions of long-term synaptic transmission plasticity. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  11. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f...

  12. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    PubMed

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  13. Female scarcity reduces women's marital ages and increases variance in men's marital ages.

    PubMed

    Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom

    2010-08-05

    When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  14. An Analysis of Variance in Teacher Self-Efficacy Levels Dependent on Participation Time in Professional Learning Communities

    ERIC Educational Resources Information Center

    Marx, Megan D.

    2016-01-01

    The purpose of this study was to determine variance in mean levels of teacher self-efficacy (TSE) and its three factors--efficacy in student engagement (ESE), efficacy in instructional strategies (EIS), and efficacy in classroom management (ECM)--based on participation and time spent in professional learning communities (PLCs). In this…

  15. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and average...

  16. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (formerly the Radiation Control for Health and Safety Act of 1968), and: (i) The scope of the requested... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or more...

  17. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  18. Quantum mechanical expansion of variance of a particle in a weakly non-uniform electric and magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Poh Kam; Kosaka, Wataru; Oikawa, Shun-ichi

    We have solved the Heisenberg equation of motion for the time evolution of the position and momentum operators for a non-relativistic spinless charged particle in the presence of a weakly non-uniform electric and magnetic field. It is shown that the drift velocity operator obtained in this study agrees with the classical counterpart, and that, using the time dependent operators, the variances in position and momentum grow with time. The expansion rates of the variances in position and momentum depend on the magnetic gradient scale length but are independent of the electric gradient scale length. In the presence of a weakly non-uniform electric and magnetic field, the theoretical variance expansion rates are in good agreement with the numerical analysis. It is analytically shown that the variance in position reaches the square of the interparticle separation on a characteristic time much shorter than the proton collision time of fusion plasmas. After this time, the wavefunctions of neighboring particles would overlap and, as a result, the conventional classical analysis may lose its validity. The broad spatial distribution of individual particles means that their Coulomb interactions with other particles become weaker than expected in classical mechanics.

  19. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  20. Variance decomposition in stochastic simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  1. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
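
    The three listings above describe the same work. The sketch below illustrates the two ingredients on a birth-death model: a modified next-reaction simulation driven by one pre-drawn unit-exponential stream per channel (the random-time-change representation), and a pick-freeze estimate of each channel's first-order Sobol index for the final population. Rates, horizon, and sample sizes are arbitrary, and this is not the authors' code.

        import numpy as np

        def birth_death_mnrm(streams, lam=1.0, mu=0.1, x0=0, t_end=20.0):
            """Modified next-reaction simulation of a birth-death process whose two channels are
            driven by pre-drawn streams of unit-exponential increments (one independent unit-rate
            Poisson clock per channel)."""
            x, t = x0, 0.0
            T = np.zeros(2)                                    # internal times of the two Poisson clocks
            idx = np.zeros(2, dtype=int)                       # position within each stream
            P = np.array([streams[0][0], streams[1][0]])       # next internal firing times
            while True:
                a = np.array([lam, mu * x], dtype=float)       # propensities: birth, death
                with np.errstate(divide="ignore", invalid="ignore"):
                    dt = np.where(a > 0.0, (P - T) / a, np.inf)
                k = int(np.argmin(dt))
                if t + dt[k] > t_end:
                    return x
                t += dt[k]
                T += a * dt[k]
                x += 1 if k == 0 else -1
                idx[k] += 1
                P[k] += streams[k][idx[k]]

        def first_order_sobol(n=1500, depth=200, seed=0):
            """Pick-freeze estimate of the first-order Sobol index of the final population with
            respect to each channel's driving stream."""
            rng = np.random.default_rng(seed)
            A = rng.exponential(size=(n, 2, depth))
            B = rng.exponential(size=(n, 2, depth))
            yA = np.array([birth_death_mnrm(s) for s in A], dtype=float)
            yB = np.array([birth_death_mnrm(s) for s in B], dtype=float)
            S = []
            for k in range(2):
                C = B.copy()
                C[:, k, :] = A[:, k, :]                        # freeze channel k's stream, resample the rest
                yC = np.array([birth_death_mnrm(s) for s in C], dtype=float)
                S.append((np.mean(yA * yC) - yA.mean() * yB.mean()) / yA.var())
            return S

        print(np.round(first_order_sobol(), 2))                # share of output variance per channel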

  2. Robust versus consistent variance estimators in marginal structural Cox models.

    PubMed

    Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris

    2018-06-11

    In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.

  3. 40 CFR 142.302 - Who can issue a small system variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under this...

  4. Risk and utility in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Cohen, Morrel H.; Natoli, Vincent D.

    2003-06-01

    Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
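
    For a Gaussian yield both criteria have closed forms, which makes the separation between them easy to see; the goal, the risk-aversion parameter, and the exponential utility used below are illustrative choices, not the paper's calibration.

        import numpy as np
        from scipy.stats import norm

        def risk_and_utility(mu, sigma, goal, a=2.0):
            """Risk = P(yield < goal) for a Gaussian yield N(mu, sigma^2); utility = E[1 - exp(-a*yield)],
            an exponential utility with positive, decreasing marginal value (closed form for a Gaussian)."""
            risk = norm.cdf((goal - mu) / sigma)
            utility = 1.0 - np.exp(-a * mu + 0.5 * a**2 * sigma**2)
            return risk, utility

        # two candidate allocations with the same expected yield but different volatility
        for mu, sigma in [(0.07, 0.10), (0.07, 0.20)]:
            r, u = risk_and_utility(mu, sigma, goal=0.03)
            print(f"mu={mu:.2f} sigma={sigma:.2f}  risk={r:.3f}  utility={u:.4f}")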

  5. Investigation of effective decision criteria for multiobjective optimization in IMRT.

    PubMed

    Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H

    2011-06-01

    To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta)) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D
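
    Of the decision criteria listed, the gEUD is the easiest to make concrete; the helper below evaluates it for a toy dose array and shows how the exponent moves it between the mean dose and the maximum dose (a = 4 matches the value quoted above, the dose values are arbitrary).

        import numpy as np

        def geud(dose, a):
            """Generalized equivalent uniform dose of a voxel dose array: ( mean(d_i**a) )**(1/a).
            a = 1 returns the mean dose; large a approaches the maximum dose."""
            d = np.asarray(dose, dtype=float)
            return float(np.mean(d ** a) ** (1.0 / a))

        doses = np.array([10.0, 20.0, 30.0, 60.0])            # toy OAR voxel doses in Gy
        for a in (1, 4, 20):
            print(a, round(geud(doses, a), 2))                # rises from the mean toward the maximum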

  6. Effect of mitomycin-C on the variance in refractive outcomes after photorefractive keratectomy.

    PubMed

    Sy, Mary Ellen; Zhang, Lijun; Yeroushalmi, Allen; Huang, Derek; Hamilton, D Rex

    2014-12-01

    To compare the variance in manifest refraction spherical equivalent (MRSE) after photorefractive keratectomy (PRK) with mitomycin-C (MMC), PRK without MMC, and laser in situ keratomileusis (LASIK) for the treatment of myopic astigmatism. Jules Stein Eye Institute, University of California, Los Angeles, Los Angeles, California, USA. Retrospective case series. Patients were classified into 3 groups of preoperative refraction-matched eyes as follows: PRK with MMC 0.02%, PRK without MMC, and LASIK. The preoperative and postoperative MRSE, preoperative corrected distance visual acuity, and postoperative uncorrected distance visual acuity (UDVA) were analyzed. Each group comprised 30 eyes. Follow-up was at least 6 months in the LASIK group and 12 months in the 2 PRK groups. There were no statistically significant differences in the mean preoperative MRSE (P=.95) or postoperative MRSE (P=.06) between the 3 groups. The mean postoperative MRSE was -0.07 diopter (D) ± 0.47 (SD), -0.14 ± 0.26 D, and 0.02 ± 0.25 D in the PRK with MMC 0.02% group, PRK without MMC group, and LASIK group, respectively. The variance in the postoperative MRSE in the PRK with MMC 0.02% group was significantly higher than that in the PRK without MMC group (P=.002) and in the LASIK group (P=.001). There was no statistically significant difference in the mean postoperative UDVA between the 3 groups (P=.47). Refractive outcomes after PRK for myopia were more variable when MMC 0.02% was used. This should be weighed against the advantage of intraoperative MMC use in reducing haze after PRK. Copyright © 2014 ASCRS and ESCRS. All rights reserved.
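
    The abstract does not state which statistical test was used to compare the variances; one common choice is Levene's test, sketched here on data simulated with the reported group means and standard deviations (a stand-in, not the study data).

        import numpy as np
        from scipy.stats import levene

        # groups simulated with the postoperative MRSE means and SDs reported above (30 eyes each)
        rng = np.random.default_rng(4)
        prk_mmc = rng.normal(-0.07, 0.47, 30)
        prk_only = rng.normal(-0.14, 0.26, 30)
        lasik = rng.normal(0.02, 0.25, 30)

        print(levene(prk_mmc, prk_only))     # PRK with MMC vs PRK without MMC
        print(levene(prk_mmc, lasik))        # PRK with MMC vs LASIK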

  7. The link between diffusion MRI and tumor heterogeneity: Mapping cell eccentricity and density by diffusional variance decomposition (DIVIDE).

    PubMed

    Szczepankiewicz, Filip; van Westen, Danielle; Englund, Elisabet; Westin, Carl-Fredrik; Ståhlberg, Freddy; Lätt, Jimmy; Sundgren, Pia C; Nilsson, Markus

    2016-11-15

    The structural heterogeneity of tumor tissue can be probed by diffusion MRI (dMRI) in terms of the variance of apparent diffusivities within a voxel. However, the link between the diffusional variance and the tissue heterogeneity is not well-established. To investigate this link we test the hypothesis that diffusional variance, caused by microscopic anisotropy and isotropic heterogeneity, is associated with variable cell eccentricity and cell density in brain tumors. We performed dMRI using a novel encoding scheme for diffusional variance decomposition (DIVIDE) in 7 meningiomas and 8 gliomas prior to surgery. The diffusional variance was quantified from dMRI in terms of the total mean kurtosis (MKT), and DIVIDE was used to decompose MKT into components caused by microscopic anisotropy (MKA) and isotropic heterogeneity (MKI). Diffusion anisotropy was evaluated in terms of the fractional anisotropy (FA) and microscopic fractional anisotropy (μFA). Quantitative microscopy was performed on the excised tumor tissue, where structural anisotropy and cell density were quantified by structure tensor analysis and cell nuclei segmentation, respectively. In order to validate the DIVIDE parameters they were correlated to the corresponding parameters derived from microscopy. We found an excellent agreement between the DIVIDE parameters and corresponding microscopy parameters; MKA correlated with cell eccentricity (r = 0.95, p < 10^-7) and MKI with the cell density variance (r = 0.83, p < 10^-3). The diffusion anisotropy correlated with structure tensor anisotropy on the voxel scale (FA, r = 0.80, p < 10^-3) and microscopic scale (μFA, r = 0.93, p < 10^-6). A multiple regression analysis showed that the conventional MKT parameter reflects both variable cell eccentricity and cell density, and therefore lacks specificity in terms of microstructure characteristics. However, specificity was obtained by decomposing the two contributions; MKA was associated only to cell eccentricity

  8. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  9. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    NASA Astrophysics Data System (ADS)

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  10. A stochastic hybrid model for pricing forward-start variance swaps

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah

    2017-11-01

    Recently, market players have been exposed to the astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is being inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is being extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps were mainly focusing on instantaneous-start variance swaps without considering the interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. Comparison done with the Monte Carlo simulation which was set as our main reference point reveals that our pricing formula gains almost the same precision in a shorter execution time.
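
    As a reference point of the kind the authors use, a plain Monte Carlo estimate of the fair strike of a discretely sampled forward-start variance swap under Heston dynamics is sketched below; the stochastic CIR interest rate of the hybrid model is omitted, the drift is set to zero for brevity, and all parameter values are illustrative.

        import numpy as np

        def forward_variance_swap_strike(kappa=2.0, theta=0.04, xi=0.3, rho=-0.7, v0=0.04,
                                         t_start=0.25, t_end=1.0, n_obs=63,
                                         n_paths=20000, seed=0):
            """Monte Carlo fair strike (annualised, in variance points) of a discretely sampled
            forward-start variance swap: average of squared log-returns over [t_start, t_end]."""
            rng = np.random.default_rng(seed)
            dt = (t_end - t_start) / n_obs
            n_steps = int(round(t_end / dt))
            start_idx = int(round(t_start / dt))
            log_s = np.zeros(n_paths)                 # log price (initial level irrelevant for returns)
            v = np.full(n_paths, v0)
            sum_sq = np.zeros(n_paths)
            for k in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
                v_pos = np.maximum(v, 0.0)            # full-truncation Euler for the variance process
                new_log_s = log_s - 0.5 * v_pos * dt + np.sqrt(v_pos * dt) * z1
                v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
                if k >= start_idx:                    # accumulate only after the forward-start date
                    sum_sq += (new_log_s - log_s) ** 2
                log_s = new_log_s
            return (sum_sq / (t_end - t_start)).mean()

        print(round(forward_variance_swap_strike(), 4))   # close to theta for these inputs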

  11. Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming

    USGS Publications Warehouse

    Karlinger, M.R.; Skrivan, James A.

    1981-01-01

    Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
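
    A compact ordinary-kriging helper with an exponential semivariogram is sketched below to show how the point estimate and the kriging variance come out of the same linear system; the variogram parameters and the toy 'precipitation' field are placeholders, not values fitted to the Powder River data.

        import numpy as np

        def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, vrange=50.0, nugget=0.0):
            """Ordinary kriging estimate and kriging variance with the exponential semivariogram
            gamma(h) = nugget + sill * (1 - exp(-h / vrange))."""
            gamma = lambda h: nugget + sill * (1.0 - np.exp(-h / vrange))
            n = len(z_obs)
            d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            A = np.empty((n + 1, n + 1))
            A[:n, :n] = gamma(d_obs)
            A[n, :] = 1.0
            A[:, n] = 1.0
            A[n, n] = 0.0                                      # row/column for the Lagrange multiplier
            b = np.append(gamma(np.linalg.norm(xy_obs - xy_new, axis=-1)), 1.0)
            w = np.linalg.solve(A, b)
            estimate = w[:n] @ z_obs
            kriging_variance = w @ b                           # sum(lambda_i * gamma_i0) + mu
            return estimate, kriging_variance

        # toy usage on a synthetic 'precipitation' surface sampled at 30 stations
        rng = np.random.default_rng(5)
        pts = rng.uniform(0.0, 100.0, size=(30, 2))
        z = 300.0 + 2.0 * pts[:, 0] + 10.0 * rng.standard_normal(30)
        est, kv = ordinary_kriging(pts, z, np.array([50.0, 50.0]))
        print(round(float(est), 1), round(float(kv), 3))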

  12. On Reverse Stackelberg Game and Optimal Mean Field Control for a Large Population of Thermostatically Controlled Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, in the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. In the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function that depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.

  13. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    NASA Astrophysics Data System (ADS)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially the surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior of the γ-Ret shear stress transport model, which includes modeling of flow transition and surface roughness effects. The roughness effects are modeled to simulate sand grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A design of experiments (DoE) was performed and a Kriging surrogate model was built. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved and that the proposed method can be applied successfully to laminar flow nacelle design.
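
    The uncertainty-propagation step can be illustrated for a single standard-normal input with a non-intrusive projection onto probabilists' Hermite polynomials; the toy model standing in for the CFD module, the truncation order, and the quadrature level are assumptions of this sketch.

        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def pce_mean_var(model, order=6, n_quad=12):
            """Non-intrusive polynomial chaos for one standard-normal input: project the model onto
            probabilists' Hermite polynomials with Gauss-Hermite quadrature and read the output mean
            and variance off the coefficients."""
            nodes, weights = hermegauss(n_quad)
            weights = weights / np.sqrt(2.0 * np.pi)           # normalise to the N(0, 1) measure
            y = model(nodes)
            coeffs = [np.sum(weights * y * hermeval(nodes, [0.0] * k + [1.0])) / factorial(k)
                      for k in range(order + 1)]               # <y, He_k> / E[He_k^2], with E[He_k^2] = k!
            mean = coeffs[0]
            var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
            return mean, var

        # toy stand-in for the CFD response with an uncertain roughness parameter xi ~ N(0, 1)
        model = lambda xi: 0.02 + 0.005 * xi + 0.001 * xi**2
        m, v = pce_mean_var(model)
        print(round(float(m), 5), round(float(v), 8))          # exact: 0.021 and 0.005**2 + 2*0.001**2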

  14. Variance and covariance estimates for weaning weight of Senepol cattle.

    PubMed

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

    Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (sigma 2A), maternal additive genetic variance (sigma 2M), covariance between direct and maternal additive genetic effects (sigma AM), permanent maternal environmental variance (sigma 2PE), and residual variance (sigma 2 epsilon) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A-1), to their expectations. Estimates were sigma 2A, 139.05 and 138.14 kg2; sigma 2M, 307.04 and 288.90 kg2; sigma AM, -117.57 and -103.76 kg2; sigma 2PE, -258.35 and -243.40 kg2; and sigma 2 epsilon, 588.18 and 577.72 kg2 with and without A-1, respectively. Heritability estimates for direct additive (h2A) were .211 and .210 with and without A-1, respectively. Heritability estimates for maternal additive (h2M) were .47 and .44 with and without A-1, respectively. Correlations between direct and maternal (IAM) effects were -.57 and -.52 with and without A-1, respectively.

  15. The genetic variance but not the genetic covariance of life-history traits changes towards the north in a time-constrained insect.

    PubMed

    Sniegula, Szymon; Golab, Maria J; Drobniak, Szymon M; Johansson, Frank

    2018-06-01

    Seasonal time constraints are usually stronger at higher than lower latitudes and can exert strong selection on life-history traits and the correlations among these traits. To predict the response of life-history traits to environmental change along a latitudinal gradient, information must be obtained about genetic variance in traits and also genetic correlation between traits, that is the genetic variance-covariance matrix, G. Here, we estimated G for key life-history traits in an obligate univoltine damselfly that faces seasonal time constraints. We exposed populations to simulated native temperatures and photoperiods and common garden environmental conditions in a laboratory set-up. Despite differences in genetic variance in these traits between populations (lower variance at northern latitudes), there was no evidence for latitude-specific covariance of the life-history traits. At simulated native conditions, all populations showed strong genetic and phenotypic correlations between traits that shaped growth and development. The variance-covariance matrix changed considerably when populations were exposed to common garden conditions compared with the simulated natural conditions, showing the importance of environmentally induced changes in multivariate genetic structure. Our results highlight the importance of estimating variance-covariance matrixes in environments that mimic selection pressures and not only trait variances or mean trait values in common garden conditions for understanding the trait evolution across populations and environments. © 2018 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2018 European Society For Evolutionary Biology.

  16. Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data

    PubMed Central

    Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk

    2015-01-01

    Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and shows promise to apply the SNP regression method to other traits and species, including human diseases. PMID:26438289

  17. Associations among Aspects of Meaning in Life and Death Anxiety in Young Adults

    ERIC Educational Resources Information Center

    Lyke, Jennifer

    2013-01-01

    This investigation explored the relationship between two aspects of meaning in life, presence of meaning in life and search for meaning in life, and the fear of death and dying in young adults. A community sample of participants ("N" = 168) completed measures of meaning in life and death anxiety. A multivariate analysis of variance was…

  18. Estimation of Variance in the Case of Complex Samples.

    ERIC Educational Resources Information Center

    Groenewald, A. C.; Stoker, D. J.

    In a complex sampling scheme it is desirable to select the primary sampling units (PSUs) without replacement to prevent duplications in the sample. Since the estimation of the sampling variances is more complicated when the PSUs are selected without replacement, L. Kish (1965) recommends that the variance be calculated using the formulas…

  19. Investigation of temporal vascular effects induced by focused ultrasound treatment with speckle-variance optical coherence tomography

    PubMed Central

    Tsai, Meng-Tsan; Chang, Feng-Yu; Lee, Cheng-Kuang; Gong, Cihun-Siyong Alex; Lin, Yu-Xiang; Lee, Jiann-Der; Yang, Chih-Hsun; Liu, Hao-Li

    2014-01-01

    Focused ultrasound (FUS) can be used to locally and temporally enhance vascular permeability, improving the efficiency of drug delivery from the blood vessels into the surrounding tissue. However, it is difficult to evaluate in real time the effect induced by FUS and to noninvasively observe the permeability enhancement. In this study, speckle-variance optical coherence tomography (SVOCT) was implemented for the investigation of temporal effects on vessels induced by FUS treatment. With OCT scanning, the dynamic change in vessels during FUS exposure can be observed and studied. Moreover, the vascular effects induced by FUS treatment with and without the presence of microbubbles were investigated and quantitatively compared. Additionally, 2D and 3D speckle-variance images were used for quantitative observation of blood leakage from vessels due to the permeability enhancement caused by FUS, which could be an indicator that can be used to determine the influence of FUS power exposure. In conclusion, SVOCT can be a useful tool for monitoring FUS treatment in real time, facilitating the dynamic observation of temporal effects and helping to determine the optimal FUS power. PMID:25071945
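
    The basic speckle-variance computation is simply the per-pixel variance of intensity across repeated B-scans, as sketched below on synthetic data; the frame count and the toy 'vessel' region are arbitrary.

        import numpy as np

        def speckle_variance(frames):
            """Speckle-variance image from N repeated B-scans of the same location: the per-pixel
            intensity variance across frames, which is high where flowing blood decorrelates the speckle."""
            return np.asarray(frames, dtype=float).var(axis=0)

        # toy stack: static tissue plus a few rows whose speckle fluctuates frame to frame ('vessel')
        rng = np.random.default_rng(6)
        stack = np.ones((8, 64, 64))
        stack[:, 30:34, :] += rng.standard_normal((8, 4, 64))
        sv = speckle_variance(stack)
        print(sv[10, :3].round(3), sv[31, :3].round(3))        # near zero in static rows, high in the 'vessel'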

  20. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...