NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency-domain design of Minimum Average Correlation Energy (MACE) filters for optical pattern recognition introduces an implementation limitation: the resulting correlations are circular. An alternative methodology which uses space-domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted, and it is shown to compare favorably with the well-known Minimum Variance Synthetic Discriminant Function and the space-domain Minimum Average Correlation Energy filter, both of which are special cases of the present design.
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
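For illustration, a minimal numpy sketch of the generalized-log transform and a Box-Cox-style profile-likelihood grid search for its parameter; the paper's linear model is reduced here to a mean-only model, and the data and λ grid are mock values:

```python
import numpy as np

def glog(y, lam):
    # Generalized-log transform: stabilizes variance to first order
    # (Durbin et al. 2002; Huber et al. 2002; Munson 2001).
    return np.log(y + np.sqrt(y**2 + lam))

def profile_loglik(y, lam):
    # Gaussian log-likelihood of the transformed data (mean-only model)
    # plus the log-Jacobian of the transform, d glog/dy = (y^2 + lam)^(-1/2).
    z = glog(y, lam)
    n = y.size
    rss = np.sum((z - z.mean()) ** 2)
    log_jacobian = -0.5 * np.sum(np.log(y**2 + lam))
    return -0.5 * n * np.log(rss / n) + log_jacobian

rng = np.random.default_rng(0)
y = np.abs(rng.normal(1000.0, 200.0, 500))   # mock intensities
lams = np.logspace(2, 8, 61)                 # illustrative candidate grid
lam_hat = lams[np.argmax([profile_loglik(y, l) for l in lams])]
```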
The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Goldstein, M. L.
2006-01-01
We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum-to-minimum power ratio (from approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimator combines two prior estimates of a population parameter with a weighted average, where each scalar weight is inversely proportional to the corresponding variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
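As a sketch of the composite estimator this abstract builds on (the Kalman filter generalizes it to vectors and sequential updates), assuming two unbiased estimates with known variances:

```python
import numpy as np

def composite_estimate(x1, v1, x2, v2):
    """Inverse-variance weighted combination of two prior estimates of a
    population parameter; minimum variance among linear unbiased
    combinations, with no distributional assumptions beyond the variances."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)   # never larger than min(v1, v2)
    return est, var

# e.g. combining a field-plot estimate with a remote-sensing estimate
est, var = composite_estimate(100.0, 25.0, 110.0, 50.0)
```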
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required by the test. The formulas are applicable to cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size given by the proposed formulas under various conditions is smaller than that given by the conventional formulas. Moreover, given a sample of the size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
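The sample-size formulas themselves are in the paper; for context, a sketch of Yuen's test statistic that they are designed for, using the standard trimmed-mean/winsorized-variance construction:

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample trimmed-mean test, robust to unequal variances
    and non-normality; 'trim' is the proportion cut from each tail."""
    def parts(a):
        a = np.sort(np.asarray(a, float))
        n = a.size
        g = int(np.floor(trim * n))
        h = n - 2 * g                              # effective sample size
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]      # winsorize the tails
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
        return stats.trim_mean(a, trim), d, h
    m1, d1, h1 = parts(x)
    m2, d2, h2 = parts(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1**2 / (h1 - 1) + d2**2 / (h2 - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)
```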
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the optimal portfolio composition differs across the stocks, and that investors can attain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
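As a sketch of the underlying optimization (short sales allowed; the constraint set in the paper may differ), the mean-variance weights have a closed form via the Lagrangian's linear KKT system; the return data below are mock stand-ins for the 20 FBMKLCI stocks:

```python
import numpy as np

def mean_variance_weights(mu, cov, target):
    """Markowitz mean-variance weights: minimize w' cov w subject to
    w' mu = target and sum(w) = 1, solved via the KKT linear system."""
    n = len(mu)
    ones = np.ones(n)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * cov
    A[:n, n], A[n, :n] = mu, mu
    A[:n, n + 1], A[n + 1, :n] = ones, ones
    b = np.zeros(n + 2)
    b[n], b[n + 1] = target, 1.0
    return np.linalg.solve(A, b)[:n]   # drop the two multipliers

rng = np.random.default_rng(1)
R = rng.normal(0.002, 0.02, (260, 20))   # mock weekly returns, 20 stocks
w = mean_variance_weights(R.mean(0), np.cov(R.T), target=0.003)
```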
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution spectrum estimator. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed and shown to be amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and traditional imaging on both simulated and actual measured data.
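For reference, a minimal sketch of the minimum variance (Capon) spectrum estimator that MVM-type superresolution builds on, for a 1-D signal; the snapshot length, diagonal loading, and covariance convention are illustrative choices:

```python
import numpy as np

def mvm_spectrum(x, p=20, nfreq=512):
    """Minimum variance (Capon) spectral estimate: R is the p x p sample
    covariance of overlapping snapshots, and the MV power at normalized
    frequency f is 1 / (a^H R^{-1} a) for steering vector a(f)."""
    x = np.asarray(x, dtype=complex)
    snaps = np.array([x[i:i + p] for i in range(len(x) - p + 1)])
    R = snaps.conj().T @ snaps / snaps.shape[0]
    # small diagonal loading for numerical robustness (illustrative)
    Rinv = np.linalg.inv(R + 1e-6 * np.trace(R).real / p * np.eye(p))
    freqs = np.linspace(-0.5, 0.5, nfreq, endpoint=False)
    P = np.empty(nfreq)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(p))
        P[k] = 1.0 / np.real(a.conj() @ Rinv @ a)
    return freqs, P
```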
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
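For a common mean under multivariate normality, the combination described here reduces to a generalized-least-squares form; a sketch, where the covariance matrix would be built from sample variances and correlation coefficients as in the abstract:

```python
import numpy as np

def ml_combined_eigenvalue(estimates, cov):
    """ML/GLS combination of correlated estimates of one eigenvalue:
    lambda_hat = (1' C^{-1} x) / (1' C^{-1} 1), with variance
    1 / (1' C^{-1} 1), never larger than any individual variance."""
    x = np.asarray(estimates, float)
    ones = np.ones_like(x)
    Cinv_1 = np.linalg.solve(np.asarray(cov, float), ones)
    lam = (Cinv_1 @ x) / (Cinv_1 @ ones)
    var = 1.0 / (Cinv_1 @ ones)
    return lam, var
```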
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamic (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame, which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features.
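A minimal sketch of the minimum variance eigenvalue analysis used here (and in several other records in this list), in the classical Sonnerup-Cahill form:

```python
import numpy as np

def minimum_variance_analysis(B):
    """Minimum variance analysis of a field time series B, shape (N, 3).
    Eigenvectors of the variance matrix give the maximum, intermediate
    and minimum variance directions; eigenvalue ratios measure the
    power anisotropy discussed in the abstracts above."""
    B = np.asarray(B, float)
    M = np.cov(B.T, bias=True)         # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    n_min = evecs[:, 0]                # minimum variance direction
    anisotropy = evals[2] / evals[0]   # max-to-min power ratio
    return evals, evecs, n_min, anisotropy
```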
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
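For concreteness, a sketch of one common crude (two-part) MDL form the paper evaluates, together with the free-parameter count for a discrete Bayesian network; the network search procedure itself is omitted:

```python
import numpy as np

def crude_mdl(log_likelihood, k, n):
    """Crude two-part MDL score: data code length plus model code length,
    with k free parameters and sample size n. Lower is better; the
    (k/2) log n term penalizes complexity, trading bias against variance."""
    return -log_likelihood + 0.5 * k * np.log(n)

def bn_free_params(states, parent_states):
    """Free parameters of a discrete Bayesian network: sum over nodes of
    (r_i - 1) * q_i, where r_i is the node's state count and q_i the
    product of its parents' state counts."""
    k = 0
    for r, ps in zip(states, parent_states):
        q = int(np.prod(ps)) if ps else 1
        k += (r - 1) * q
    return k
```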
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
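A sketch of the analysis-of-variance estimator of r and the classical formula for the number of measurements needed to reach a target determination; the principal-component variants the paper favors are not reproduced here:

```python
import numpy as np

def repeatability_anova(Y):
    """Repeatability from a genotype x measurement matrix Y (one row per
    genotype, one column per measurement), via the one-way ANOVA
    estimator r = (MSg - MSe) / (MSg + (m - 1) MSe)."""
    g, m = Y.shape
    grand = Y.mean()
    msg = m * ((Y.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    mse = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))
    return (msg - mse) / (msg + (m - 1) * mse)

def n_measurements(r, R2=0.9):
    """Minimum number of measurements for target coefficient of
    determination R2, given repeatability r: m = R2(1 - r) / (r(1 - R2))."""
    return R2 * (1 - r) / (r * (1 - R2))
```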
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds, observed in situ by the spacecraft Wind at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
Minimal Model of Prey Localization through the Lateral-Line System
NASA Astrophysics Data System (ADS)
Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo
2003-10-01
The clawed frog Xenopus is an aquatic predator that catches prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey, and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M
2014-01-01
In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
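A sketch of the linearly-constrained minimum variance (LCMV) weight computation in its standard closed form; the covariance R, constraint matrix C, and response vector f would come from the ultrasonic array model in the paper:

```python
import numpy as np

def lcmv_weights(R, C, f):
    """LCMV beamformer: minimize w^H R w subject to C^H w = f, where R is
    the sensor covariance, C stacks constraint steering vectors (e.g. one
    column per ND mote), and f holds the desired responses (1 for the
    selected mote, 0 for interferers). Closed form:
    w = R^{-1} C (C^H R^{-1} C)^{-1} f."""
    Rinv_C = np.linalg.solve(R, C)
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)
```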
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Lekking without a paradox in the buff-breasted sandpiper
Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.
1997-01-01
Females in lek‐breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff‐breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single‐locus minisatellite DNA probes to provide the first evidence from a lek‐breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff‐breasted sandpipers. The behavior of other lek‐breeding birds is sufficiently similar to that of buff‐breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.
NASA Astrophysics Data System (ADS)
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the set of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation (1):

log(r²ij) = b · dij (1)

where log(r²ij) is the common variance between the target (i) and neighbouring (j) series, dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain, and converted to a regular 10 km² grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum Tmax and minimum Tmin) above the selected threshold (50%, Pearson r ~0.70) does not, on average, exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main driving factors behind higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than daytime temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a higher network density would be necessary to capture the higher spatial variability of Tmin.
A conservative distance for reference series can be set at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
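A sketch of the CDD computation implied by equation (1): fit the slope b through the origin (r² = 1 at zero distance) and invert for the distance at which the common variance falls to the threshold; the station selection and kriging steps are omitted:

```python
import numpy as np

def cdd_threshold(dist, r2, r2_threshold=0.5):
    """Correlation Decay Distance from log(r2_ij) = b * d_ij: fit b by
    least squares through the origin, then return the distance at which
    the common variance drops to the threshold (50% in the text, r ~ 0.70)."""
    d = np.asarray(dist, float)           # pairwise distances, km
    y = np.log(np.asarray(r2, float))     # log common variance
    b = (d @ y) / (d @ d)                 # slope, b < 0
    return np.log(r2_threshold) / b       # threshold distance, km
```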
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati (and/or Ermakov)-type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instances when one variance is a minimum and the other is a maximum, i.e., when squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns: the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm; it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases, where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
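The TAQMV algorithm itself is algebraic; as a point of reference, the weighted least-squares problem it solves can be written down directly. This iterative sketch only illustrates the cost function, and substitutes a spherical earth for the paper's ellipsoid:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0        # speed of light, m/s
R_EARTH = 6371000.0    # spherical-earth radius (the paper uses an ellipsoid)

def solve_toa_altitude(sensors, toas, sigma_toa, h0, sigma_h, x0):
    """Weighted least-squares baseline for the overdetermined TOA +
    altitude problem. Unknowns: emitter position p and emission time t.
    The assumed altitude h0 enters as a soft constraint with its own
    noise level, exactly as the TOA measurements do."""
    sensors = np.asarray(sensors, float)   # shape (m, 3)
    def resid(u):
        p, t = u[:3], u[3]
        r_toa = (t + np.linalg.norm(sensors - p, axis=1) / C - toas) / sigma_toa
        r_alt = (np.linalg.norm(p) - R_EARTH - h0) / sigma_h
        return np.append(r_toa, r_alt)
    return least_squares(resid, x0).x      # [x, y, z, t]
```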
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection conventionally means ‘minimizing the risk, given a certain level of return’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches that consider minimum transaction lots were developed based on the linear Mean Absolute Deviation (MAD), variance (as in Markowitz’s model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
Reexamining the minimum viable population concept for long-lived species.
Shoemaker, Kevin T; Breisch, Alvin R; Jaycox, Jesse W; Gibbs, James P
2013-06-01
For decades conservation biologists have proposed general rules of thumb for minimum viable population size (MVP); typically, they range from hundreds to thousands of individuals. These rules have shifted conservation resources away from small and fragmented populations. We examined whether iteroparous, long-lived species might constitute an exception to general MVP guidelines. On the basis of results from a 10-year capture-recapture study in eastern New York (U.S.A.), we developed a comprehensive demographic model for the globally threatened bog turtle (Glyptemys muhlenbergii), which the IUCN designated as endangered in 2011. We assessed population viability across a wide range of initial abundances and carrying capacities. Not accounting for inbreeding, our results suggest that bog turtle colonies with as few as 15 breeding females have >90% probability of persisting for >100 years, provided vital rates and environmental variance remain at currently estimated levels. On the basis of our results, we suggest that MVP thresholds may be 1-2 orders of magnitude too high for many long-lived organisms. Consequently, protection of small and fragmented populations may constitute a viable conservation option for such species, especially in a regional or metapopulation context. © 2013 Society for Conservation Biology.
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, a low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, an eigenspace-based minimum variance (EIBMV) beamformer is employed for second harmonic USI. Tissue Harmonic Imaging (THI) is achieved by the Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we investigate the effects of variations of the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and proper EIBMV parameters are indicated for THI.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
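A sketch of the minimum-variance calculation for one plausible variable set (slope constant, so bed shear varies as Q^f and the Darcy-Weisbach friction factor as Q^(f-2m)); the paper tests several variable sets, so the exponent expressions here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def minimum_variance_exponents():
    """Langbein-type minimum variance solution for hydraulic exponents
    (w ~ Q^b, d ~ Q^f, v ~ Q^m): minimize the sum of squared exponents
    of the chosen variables subject to continuity b + f + m = 1."""
    def variance_sum(x):
        b, f, m = x
        # width, depth, velocity, bed shear (~Q^f), friction factor (~Q^(f-2m))
        return b**2 + f**2 + m**2 + f**2 + (f - 2 * m) ** 2
    cons = {"type": "eq", "fun": lambda x: x.sum() - 1.0}
    res = minimize(variance_sum, x0=np.array([1/3, 1/3, 1/3]), constraints=cons)
    return res.x   # (b, f, m)
```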
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.
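For reference, a sketch of the plain DMAS combination that MVB-DMAS starts from, using the signed-square-root identity that lets the pairwise products be formed cheaply; the MV weighting of the expanded DAS terms is not reproduced here:

```python
import numpy as np

def dmas(delayed):
    """Delay-multiply-and-sum over pre-delayed channel data of shape
    (n_channels, n_samples): sum sign(s_i s_j) sqrt(|s_i s_j|) over all
    channel pairs i < j, which factors through s_hat = sign(s) sqrt(|s|)."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    n = s.shape[0]
    out = np.zeros(s.shape[1])
    for i in range(n - 1):
        out += s[i] * s[i + 1:].sum(axis=0)   # all pairs (i, j > i)
    return out
```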
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
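A minimal sketch of the variance-analysis idea: remove the large-scale background along the scan, then subtract the instrument noise variance. The quadratic detrend and the externally supplied noise variance are illustrative placeholders for the paper's procedure:

```python
import numpy as np

def gw_variance(radiance_scan, noise_var):
    """Gravity-wave variance from a radiance scan: detrend to remove the
    large-scale background, then subtract the instrument noise variance,
    which otherwise sets the detection floor (~0.1 K^2 in the abstract)."""
    x = np.arange(radiance_scan.size)
    background = np.polyval(np.polyfit(x, radiance_scan, 2), x)
    perturbation = radiance_scan - background
    return max(perturbation.var() - noise_var, 0.0)
```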
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Some refinements on the comparison of areal sampling methods via simulation
Jeffrey Gove
2017-01-01
The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and even within a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered: one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet, and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval, using 1.92 s averages, did not give minimum variance directions consistent with a horizontal current sheet.
NASA Astrophysics Data System (ADS)
Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.
2016-05-01
The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
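For context, variance in the 2-7 yr ENSO band can be computed from an evenly resampled, detrended series via Parseval's theorem; the resampling of the sub-annual δ18O record and the significance testing are omitted:

```python
import numpy as np

def band_variance(x, dt_years, band=(2.0, 7.0)):
    """Variance of an evenly sampled, mean-removed proxy series within a
    band of periods (years), by summing one-sided power-spectrum
    contributions over the corresponding frequencies."""
    x = np.asarray(x, float) - np.mean(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, d=dt_years)        # cycles per year
    power = 2.0 * np.abs(X) ** 2 / x.size**2       # one-sided variance parts
    sel = (f >= 1.0 / band[1]) & (f <= 1.0 / band[0])
    return power[sel].sum()
```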
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our algorithm in low-SNR conditions. Simulation results show the superior performance of our proposed methods.
Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals.
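A sketch of the standard MVDR weight computation that DM-AIS takes as its starting point; R is the array covariance and a the steering vector toward the wanted signal:

```python
import numpy as np

def mvdr_weights(R, a):
    """Minimum variance distortionless response weights:
    w = R^{-1} a / (a^H R^{-1} a), minimizing output power while keeping
    unit gain toward steering vector a. The DM-AIS optimization in the
    paper operates on these weight vectors to deepen interference nulls."""
    Rinv_a = np.linalg.solve(R, a)
    return Rinv_a / (a.conj() @ Rinv_a)
```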
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams under a constant total bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution to either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video or image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
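A sketch of the constant-quality idea behind minimizing the distortion variance: with an exponential rate-distortion model per sequence (a rho-domain-style stand-in, not the paper's exact model), choose rates that equalize distortion under the total-rate constraint; the paper instead derives a closed form:

```python
import numpy as np
from scipy.optimize import brentq

def equal_distortion_allocation(sigma2, alpha, theta, R_total):
    """Allocate rates R_i so that distortions D_i = sigma2_i *
    exp(-alpha_i * R_i / theta_i) are equalized under sum(R_i) = R_total.
    The model parametrization here is an illustrative assumption; the
    bracket for the bisection is assumed wide enough."""
    sigma2 = np.asarray(sigma2, float)
    alpha = np.asarray(alpha, float)
    theta = np.asarray(theta, float)
    def excess_rate(logD):
        R = theta / alpha * (np.log(sigma2) - logD)
        return np.clip(R, 0.0, None).sum() - R_total
    logD = brentq(excess_rate, -50.0, 50.0)    # common log-distortion level
    return np.clip(theta / alpha * (np.log(sigma2) - logD), 0.0, None)
```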
25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...
A test of source-surface model predictions of heliospheric current sheet inclination
NASA Technical Reports Server (NTRS)
Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.
1994-01-01
The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm owing to its simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, an algorithm named delay-multiply-and-sum (DMAS) was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms, each representing a DAS algebra. It is proposed to use the MV adaptive beamformer in place of the existing DAS terms. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
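The DAS/DMAS contrast can be sketched compactly on pre-aligned channel signals. The snippet below is illustrative only: it implements plain DAS and DMAS (with the sign-preserving square root that keeps DMAS dimensionally consistent) and omits the MV substitution that defines MVB-DMAS; array size and test signal are assumptions.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: 'delayed' is an (M, K) array of per-element signals
    already time-aligned for the focal point; DAS is the coherent sum."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: pairwise products of sign-preserving
    square-rooted signals, computed via the identity
    sum_{i<j} s_i s_j = ((sum_i s_i)**2 - sum_i s_i**2) / 2."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    return 0.5 * (total ** 2 - (s ** 2).sum(axis=0))

rng = np.random.default_rng(1)
aligned = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * rng.standard_normal((8, 256))
print(das(aligned).shape, dmas(aligned).shape)  # both (256,)
```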
Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado
Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.
2003-01-01
The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r2 value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r2 values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modelling, and climate change assessments.
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on the same units recorded over time; a panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Erickson, Darin J; Ma, Linan; Tobler, Amy L; Komro, Kelli A
2007-09-01
We examined effects of state statutory changes in DUI fine or jail penalties for first-time offenders from 1976 to 2002. A quasi-experimental time-series design was used (n = 324 monthly observations). Four outcome measures of driver involvement in alcohol-related fatal crashes were used: single-vehicle nighttime, low BAC (0.01-0.07 g/dl), medium BAC (0.08-0.14 g/dl), and high BAC (≥0.15 g/dl). All analyses of BAC outcomes included multiple imputation procedures for cases with missing data. Comparison series of non-alcohol-related crashes were included to efficiently control for effects of other factors. Statistical models included state-specific Box-Jenkins ARIMA models and pooled general linear mixed models. Twenty-six states implemented mandatory minimum fine policies and 18 states implemented mandatory minimum jail penalties. Estimated effects varied widely from state to state. Using variance-weighted meta-analysis methods to aggregate results across states, mandatory fine policies are associated with an average reduction in fatal crash involvement by drivers with BAC ≥ 0.08 g/dl of 8% (averaging 13 per state per year). Mandatory minimum jail policies are associated with a decline in single-vehicle nighttime fatal crash involvement of 6% (averaging 5 per state per year), and a decline in low-BAC cases of 9% (averaging 3 per state per year). No significant effects were observed for the other outcome measures. The overall pattern of results suggests a possible effect of mandatory fine policies in some states, but little effect of mandatory jail policies.
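The cross-state aggregation mentioned above follows the usual fixed-effect (inverse-variance) scheme; a minimal sketch, with made-up state effects and variances standing in for the study's estimates:

```python
import numpy as np

# Illustrative per-state effect estimates (proportional change in fatal
# crash involvement) and their sampling variances -- placeholders only.
effects = np.array([-0.12, -0.05, -0.10, 0.02, -0.08])
variances = np.array([0.004, 0.010, 0.006, 0.015, 0.008])

w = 1.0 / variances                       # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)  # variance-weighted mean effect
pooled_var = 1.0 / np.sum(w)              # variance of the pooled estimate
z = pooled / np.sqrt(pooled_var)          # simple significance check
print(f"pooled effect {pooled:.3f}, SE {np.sqrt(pooled_var):.3f}, z {z:.2f}")
```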
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
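A minimal sketch of the delete-one jackknife used here for sampling variances (the estimator and data below are placeholders; in the paper it is applied to estimated variance components and predicted genetic effects):

```python
import numpy as np

def jackknife(data, estimator):
    """Delete-one jackknife: returns the bias-corrected estimate and the
    jackknife variance of an arbitrary estimator, via pseudo-values."""
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * loo
    return pseudo.mean(), pseudo.var(ddof=1) / n

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=40)
est, var = jackknife(x, np.var)      # jackknife a variance estimate
print(est, var, est / np.sqrt(var))  # last value feeds the t-test
```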
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
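A sketch of the ANOVA route to r and the implied number of measurements, assuming the standard intraclass-correlation estimator and the usual prediction formula n = R²(1-r)/((1-R²)r); the simulated genotype-by-year data are placeholders.

```python
import numpy as np

def repeatability_anova(Y):
    """Y: (g, k) matrix of g genotypes measured k times.
    Returns the ANOVA (intraclass correlation) repeatability r."""
    g, k = Y.shape
    grand = Y.mean()
    ms_g = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (g - 1)               # genotype MS
    ms_e = np.sum((Y - Y.mean(axis=1, keepdims=True)) ** 2) / (g * (k - 1))  # error MS
    return (ms_g - ms_e) / (ms_g + (k - 1) * ms_e)

def n_measurements(r, R2=0.85):
    """Measurements needed to reach determination coefficient R2 given r."""
    return R2 * (1.0 - r) / ((1.0 - R2) * r)

rng = np.random.default_rng(3)
true = rng.normal(100, 15, size=75)                  # 75 genotype effects
Y = true[:, None] + rng.normal(0, 10, size=(75, 5))  # 5 yearly measurements
r = repeatability_anova(Y)
print(r, n_measurements(r, 0.85))
```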
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A key feature of the proposed approach is that it does not require the model inversion that often hampers nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 µm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alinement.
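The sine-generated curve described above is straightforward to reproduce numerically; the deflection amplitude and wavelength below are assumed values. Note the printed ratio uses the minimum (sharpest-bend) radius of curvature, so it exceeds the paper's 4.7, which is based on the average radius over the bend.

```python
import numpy as np

omega = np.deg2rad(110)          # max deflection from the mean direction (assumed)
M = 100.0                        # meander wavelength along the channel (assumed)
s = np.linspace(0, 2 * M, 2000)  # distance along the channel, two meanders
theta = omega * np.sin(2 * np.pi * s / M)  # sine-generated direction angle

# Integrate the direction angles to planimetric (x, y) coordinates.
ds = s[1] - s[0]
x = np.cumsum(np.cos(theta)) * ds
y = np.cumsum(np.sin(theta)) * ds

# Curvature kappa = d(theta)/ds; sharpest-bend radius is 1/max|kappa|.
kappa = np.gradient(theta, ds)
print(M * np.abs(kappa).max())   # wavelength / minimum bend radius
```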
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint used in formulating the array filters, which helps enhance audio quality and focusing performance.
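For reference, a minimal sketch of the MVDR weights invoked above, with an assumed steering vector and a stand-in covariance matrix; the distortionless constraint w^H a = 1 is what makes the method phase-sensitive.

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR: w = R^{-1} a / (a^H R^{-1} a), minimizing output power
    subject to unit (distortionless) gain at the steering point."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

M = 8                                                           # array elements
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(20)))  # steering vector
R = np.eye(M) + 0.5 * np.outer(a, a.conj())   # stand-in Hermitian covariance
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))                      # == 1: distortionless constraint
```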
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the limitations of DAS, providing higher image quality. However, the resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with the DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients as well as unknown measurement biases may cause large estimation errors of conventional Kalman filters. This paper proposes a derivative-free version of nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously with derivative-free character, leading to a high-precision algorithm for the Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter could achieve an accurate state estimation, much better than the conventional unscented Kalman filter, showing the ability of high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
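A stripped-down version of such a tracer, stepping the Lorentz force with classical RK4; the uniform field stands in for the draped field model near Titan, and the charge-to-mass ratio is that of a proton. A negative time step gives the time-reversed tracing described above.

```python
import numpy as np

def deriv(state, q_over_m, B):
    """Time derivative of [x, y, z, vx, vy, vz] for a charged
    particle in a magnetic field B(r) (no electric field assumed)."""
    r, v = state[:3], state[3:]
    return np.concatenate([v, q_over_m * np.cross(v, B(r))])

def rk4_step(state, dt, q_over_m, B):
    """One classical 4th-order Runge-Kutta step; dt < 0 runs backward in time."""
    k1 = deriv(state, q_over_m, B)
    k2 = deriv(state + 0.5 * dt * k1, q_over_m, B)
    k3 = deriv(state + 0.5 * dt * k2, q_over_m, B)
    k4 = deriv(state + dt * k3, q_over_m, B)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

B_uniform = lambda r: np.array([0.0, 0.0, 5e-9])   # 5 nT along z (illustrative)
state = np.array([0.0, 0.0, 0.0, 1e4, 0.0, 1e3])   # position (m), velocity (m/s)
for _ in range(1000):
    state = rk4_step(state, -0.01, 9.58e7, B_uniform)  # proton q/m, reversed time
print(state[:3])  # gyrating trajectory traced backward
```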
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of the previous study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC = 0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.
Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R
2012-01-01
This paper presents a study on self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure, or simply GMV2DOF, within two adaptive perspectives. One, from the process model point of view, uses a recursive least squares estimator algorithm for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter-scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in experiments on real plants: a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially and temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration-by-parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
Bijma, Piter
2011-12-01
Genetic selection is a major force shaping life on earth. In classical genetic theory, response to selection is the product of the strength of selection and the additive genetic variance in a trait. The additive genetic variance reflects a population's intrinsic potential to respond to selection. The ordinary additive genetic variance, however, ignores the social organization of life. With social interactions among individuals, individual trait values may depend on genes in others, a phenomenon known as indirect genetic effects. Models accounting for indirect genetic effects, however, lack a general definition of heritable variation. Here I propose a general definition of the heritable variation that determines the potential of a population to respond to selection. This generalizes the concept of heritable variance to any inheritance model and level of organization. The result shows that heritable variance determining potential response to selection is the variance among individuals in the heritable quantity that determines the population mean trait value, rather than the usual additive genetic component of phenotypic variance. It follows, therefore, that heritable variance may exceed phenotypic variance among individuals, which is impossible in classical theory. This work also provides a measure of the utilization of heritable variation for response to selection and integrates two well-known models of maternal genetic effects. The result shows that relatedness between the focal individual and the individuals affecting its fitness is a key determinant of the utilization of heritable variance for response to selection.
The relationship between seasonal mood change and personality: more apparent than real?
Jang, K L; Lam, R W; Livesley, W J; Vernon, P A
1997-06-01
A number of recent studies have reported significant relationships between seasonal mood change (seasonality) and personality. However, some of the results are difficult to interpret because of inherent methodological problems, the most important of which is the use of samples drawn from the southern as opposed to the northern hemisphere, where the phenomenon of seasonality may be quite different. The present study examined the relationship between personality and seasonality in a sample from the northern hemisphere (minimum latitude = 49 degrees N). A total of 297 adults drawn from the general population (112 male and 185 female subjects) completed the Seasonal Pattern Assessment Questionnaire, and the results obtained confirmed most of the previously reported relationships and showed that these are reliable across (i) different hemispheres, (ii) different measures of personality and (iii) clinical and general population samples. However, the impact of the relationship seems to be more apparent than real, with personality accounting for just under 15% of the total variance.
29 CFR 4.159 - General minimum wage.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, in order to have both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
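For reference, the Allan variance at averaging time tau is half the mean squared first difference of the m-sample frequency averages; a minimal non-overlapping implementation, applied to simulated white frequency noise:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging factor m:
    sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, with ybar_k
    the averages of m consecutive fractional-frequency samples."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(4)
y = rng.standard_normal(2 ** 14)     # white frequency noise
for m in (1, 4, 16, 64):
    print(m, allan_variance(y, m))   # scales ~1/m for white FM noise
```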
Code of Federal Regulations, 2014 CFR
2014-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Code of Federal Regulations, 2013 CFR
2013-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Richardson, I. G.; Ling, A. G.
2011-01-01
In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times. In 2009, the approx. 2 GV GCR intensity measured by the Newark neutron monitor increased by approx. 5% relative to its maximum value two cycles earlier, even though the average tilt angle in 2009 was slightly larger than that in 1986 (approx. 20° vs. approx. 14°), while solar wind B was significantly lower (approx. 3.9 nT vs. approx. 5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the approx. 1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past approx. 10^4 years shows nine abrupt and relatively short-lived drops of B to approximately 0 nT, with the first of these corresponding to the Spörer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of approx. 2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218] ... [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods ...
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
General object recognition is specific: Evidence from novel and familiar objects.
Richler, Jennifer J; Wilmer, Jeremy B; Gauthier, Isabel
2017-09-01
In tests of object recognition, individual differences typically correlate modestly but nontrivially across familiar categories (e.g. cars, faces, shoes, birds, mushrooms). In theory, these correlations could reflect either global, non-specific mechanisms, such as general intelligence (IQ), or more specific mechanisms. Here, we introduce two separate methods for effectively capturing category-general performance variation, one that uses novel objects and one that uses familiar objects. In each case, we show that category-general performance variance is unrelated to IQ, thereby implicating more specific mechanisms. The first approach examines three newly developed novel object memory tests (NOMTs). We predicted that NOMTs would exhibit more shared, category-general variance than familiar object memory tests (FOMTs) because novel objects, unlike familiar objects, lack category-specific environmental influences (e.g. exposure to car magazines or botany classes). This prediction held, and remarkably, virtually none of the substantial shared variance among NOMTs was explained by IQ. Also, while NOMTs correlated nontrivially with two FOMTs (faces, cars), these correlations were smaller than among NOMTs and no larger than between the face and car tests themselves, suggesting that the category-general variance captured by NOMTs is specific not only relative to IQ, but also, to some degree, relative to both face and car recognition. The second approach averaged performance across multiple FOMTs, which we predicted would increase category-general variance by averaging out category-specific factors. This prediction held, and as with NOMTs, virtually none of the shared variance among FOMTs was explained by IQ. Overall, these results support the existence of object recognition mechanisms that, though category-general, are specific relative to IQ and substantially separable from face and car recognition. They also add sensitive, well-normed NOMTs to the tools available to study object recognition. Copyright © 2017 Elsevier B.V. All rights reserved.
Husby, Arild; Gustafsson, Lars; Qvarnström, Anna
2012-01-01
The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
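Under normality, the simple plug-in estimator referred to above is π-hat = Φ(d), the standard normal CDF of the standardized mean difference; the sketch below implements only this naive estimator (which carries the small-sample bias the paper studies), not the minimum variance unbiased versions.

```python
from scipy.stats import norm

def overlap_pi(mean_t, mean_c, sd_pooled):
    """Naive estimator of pi, the proportion of treatment observations
    exceeding the control mean: pi-hat = Phi(d), with d the standardized
    mean difference (assumes normal within-group distributions)."""
    d = (mean_t - mean_c) / sd_pooled
    return norm.cdf(d)

print(overlap_pi(10.4, 9.1, 2.0))  # d = 0.65 -> pi-hat ~ 0.74
```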
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls was analyzed. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (β) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p < 0.01) different between the young and healthy elderly groups. Results also suggest that the β values between scales 1 and 2 are effective for recognizing falls-risk gait patterns. Results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity for preemptive measures to be undertaken to avoid injurious falls.
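A sketch of the scale-wise variance computation with PyWavelets, using a random-walk stand-in for an MTC series; the wavelet choice and the slope-based estimate of β are implementation assumptions not fully specified above.

```python
import numpy as np
import pywt  # PyWavelets

def multiscale_variances(x, wavelet="db4", max_level=8):
    """Variance of the detail coefficients at each dyadic scale."""
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:]                    # [cD_max, ..., cD_1]
    scales = np.arange(max_level, 0, -1)    # matching scale indices
    variances = np.array([np.var(d) for d in details])
    return scales, variances

rng = np.random.default_rng(5)
mtc = np.cumsum(rng.standard_normal(4096)) * 0.1   # stand-in MTC series
scales, var = multiscale_variances(mtc)
order = np.argsort(scales)
beta = np.polyfit(scales[order], np.log2(var[order]), 1)[0]  # multiscale exponent
print(dict(zip(scales.tolist(), var)), beta)
```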
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
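Two textbook Ne corrections of the kind the procedure applies (unequal sex ratio, and the Crow-Kimura adjustment for variance in progeny number) can be sketched as follows; these are standard formulas, not necessarily the exact ones used in the paper.

```python
def ne_sex_ratio(n_males, n_females):
    """Effective size under an unequal sex ratio: Ne = 4*Nm*Nf/(Nm + Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def ne_family_size(n, var_k):
    """Effective size under non-Poisson variance in progeny number Vk:
    Ne ~ (4N - 2)/(Vk + 2) for a population of constant size."""
    return (4.0 * n - 2.0) / (var_k + 2.0)

# 50 breeders split 10:40 instead of 25:25 shrink Ne well below 50:
print(ne_sex_ratio(10, 40), ne_sex_ratio(25, 25))  # 32.0 vs 50.0
print(ne_family_size(50, 6.0))                     # high Vk also reduces Ne
```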
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
Fractal dimension and the navigational information provided by natural scenes.
Shamsyeh Zahedi, Moosarreza; Zeil, Jochen
2018-01-01
Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast to show (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
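A minimal rotIDF computation for a panoramic image, treating column-wise rolls as azimuthal rotations; the random scene is a placeholder for a natural panorama.

```python
import numpy as np

def rot_idf(panorama):
    """Rotational image difference function: RMS pixel difference between
    the reference view and the panorama rotated in azimuth (column rolls)."""
    width = panorama.shape[1]
    return np.array([
        np.sqrt(np.mean((np.roll(panorama, shift, axis=1) - panorama) ** 2))
        for shift in range(width)
    ])

rng = np.random.default_rng(6)
scene = rng.random((64, 360))    # stand-in panorama, one column per degree
idf = rot_idf(scene)
print(idf.argmin(), idf.max())   # minimum at 0 deg; the depth is max(idf)
```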
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation between the two traits and provided the minimum and optimal sampling sizes needed to take ITVWI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and the genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hampers direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with good fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high selection accuracy (0.86 and 0.89) associated with the high mean-basis heritability (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
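The paper's improved threshold function and Gaussian-mixture coefficient classifier are not spelled out in the abstract, so the sketch below shows only the standard baseline such methods build on: a MAD-based noise variance estimate from the finest-scale detail coefficients followed by universal soft thresholding, using the PyWavelets package:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Baseline wavelet threshold de-noising.

    Noise variance is estimated from the finest-scale detail
    coefficients via the median absolute deviation (MAD), then a
    universal soft threshold is applied to all detail bands.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate: sigma = MAD / 0.6745 on the finest details.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(t.size)
# De-noising should reduce the error relative to the clean signal.
print(np.std(wavelet_denoise(noisy) - clean) < np.std(noisy - clean))  # True
```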
The Principle of Energetic Consistency
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
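The bookkeeping behind the principle rests on the elementary identity E||x||² = ||E x||² + tr(Cov x); the following minimal numerical check uses an arbitrary Gaussian ensemble standing in for the conditional distribution of an energy-variable state:

```python
import numpy as np

rng = np.random.default_rng(3)
n, members = 4, 100_000

# An arbitrary mean and covariance standing in for the conditional
# mean and conditional covariance of an energy-variable state.
mean = rng.normal(size=n)
a = rng.normal(size=(n, n))
cov = a @ a.T                      # symmetric positive definite

ensemble = rng.multivariate_normal(mean, cov, size=members)
total_energy = np.mean(np.sum(ensemble**2, axis=1))   # E ||x||^2

# Identity: E||x||^2 = ||mean||^2 + trace(cov)
print(total_energy, mean @ mean + np.trace(cov))      # nearly equal
```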
Gender variance in childhood and sexual orientation in adulthood: a prospective study.
Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T
2013-11-01
Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. To understand the extent to which the association applies to the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood, in the general population. In 1983, we measured childhood gender variance in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the prevalence of a homosexual sexual orientation in adulthood was substantially lower. © 2012 International Society for Sexual Medicine.
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Ionic strength and DOC determinations from various freshwater sources to the San Francisco Bay
Hunter, Y.R.; Kuwabara, J.S.
1994-01-01
Accurate estimation of dissolved organic carbon (DOC) across a salinity gradient is important for understanding the extent to which DOC can influence the speciation of metals such as zinc and copper. A low-temperature persulfate/oxygen/ultraviolet wet oxidation procedure was used to analyze DOC samples, adjusted for ionic strength, from major freshwater sources of the northern and southern regions of San Francisco Bay. The ionic strength of samples was modified with a chemically defined seawater medium up to 0.7 M. The results showed a minimal effect of ionic strength on oxidation efficiency for DOC sources to the Bay over an ionic strength gradient of 0.0 to 0.7 M. There were no major impacts of ionic strength on two Suwannee River fulvic acids. In general, the noted effects associated with ionic strength were smaller than the variances seen in the aquatic environment between high- and low-temperature methods.
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination: we find reliability-based reweighting but not statistically optimal cue combination.
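The statistically optimal combination referenced here is the classical minimum-variance unbiased (inverse-variance-weighted) fusion of independent estimates; a small sketch with made-up cue variances, not the paper's experimental values:

```python
import numpy as np

def fuse_cues(estimates: np.ndarray, variances: np.ndarray):
    """Minimum-variance unbiased linear fusion of independent cues.

    Weights are proportional to reliability (inverse variance), and
    the fused variance is never larger than the best single cue's.
    """
    w = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(w * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var, w

# Illustrative visual slant estimate (reliable texture) and haptic estimate.
slant, var, w = fuse_cues(np.array([32.0, 38.0]), np.array([4.0, 9.0]))
print(slant, var, w)   # fused estimate lies closer to the more reliable cue
```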
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
2014-03-27
[Extraction fragment from a thesis: table-of-contents entries (number of hops, number of sensors, standard deviation and bias figures) and an acronym list including MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying), and QAM (quadrature amplitude modulation).]
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
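The "principle of linear minimum variance" fusion used at the top level has a standard closed form for independent local estimates (a covariance-weighted average); the sketch below shows the two-filter case with illustrative numbers, not the paper's INS/GNSS/CNS models:

```python
import numpy as np

def lmv_fuse(x1, P1, x2, P2):
    """Linear minimum-variance fusion of two independent local
    state estimates x1, x2 with covariances P1, P2."""
    S = np.linalg.inv(P1 + P2)
    x = P2 @ S @ x1 + P1 @ S @ x2      # covariance-weighted average
    P = P1 @ S @ P2                    # fused covariance (<= each Pi)
    return x, P

x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
x2, P2 = np.array([1.2, 1.8]), np.diag([1.0, 0.5])
x, P = lmv_fuse(x1, P1, x2, P2)
print(x, np.diag(P))   # fused variances are smaller than either input's
```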
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
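The paper's cropped-screen approximation is not reproduced here; the following sketch only illustrates the standard baseline it accelerates, computing the steady-state Kalman gain from the discrete algebraic Riccati equation via SciPy (the small system matrices are arbitrary stand-ins, not an AO model):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Standard steady-state Kalman gain via the algebraic Riccati
# equation -- the step whose cost grows quickly with aperture size.
rng = np.random.default_rng(4)
n, m = 6, 3
A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))  # state transition
C = rng.standard_normal((m, n))                           # measurement map
Q = np.eye(n) * 0.1                                       # process noise cov
R = np.eye(m) * 0.5                                       # measurement noise cov

# The filtering Riccati equation uses the transposed system
# (duality between estimation and control).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
print(K.shape)   # (n, m)
```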
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because extreme maximum and minimum values in the data can heavily influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
Raykov, Tenko; Zinbarg, Richard E
2011-05-01
A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shojaei, M; Dumitru, N; Pella, S
2016-06-15
Purpose: High dose rate brachytherapy is a highly localized radiation therapy that has a very high dose gradient. Thus one of the most important parts of the treatment is the immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review the ML cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured with regard to their dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction motion, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40, Protection of Environment; Environmental Protection Agency (Continued); Water...; General Provisions. § 142.302 Who can issue a small system variance? A small system variance under this...
14 CFR 91.119 - Minimum safe altitudes: General.
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 14, Aeronautics and Space. § 91.119 Minimum safe altitudes: General. Except when necessary for takeoff or landing, no person may... persons, an altitude of 1,000 feet above the highest obstacle within a horizontal radius of 2,000 feet of...
14 CFR 91.119 - Minimum safe altitudes: General.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 14, Aeronautics and Space. § 91.119 Minimum safe altitudes: General. Except when necessary for takeoff or landing, no person may... persons, an altitude of 1,000 feet above the highest obstacle within a horizontal radius of 2,000 feet of...
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Climate Drivers of Blue Intensity from Two Eastern North American Conifers
NASA Astrophysics Data System (ADS)
Rayback, S. A.; Kilbride, J.; Pontius, J.; Tait, E.; Little, J.
2016-12-01
Gaining a comprehensive understanding of the climatic factors that drive tree radial growth over time is important in the context of global climate change. Herein, we explore minimum blue intensity (BI), a measure of lignin content in the latewood of tree rings, with the objective of developing BI chronologies for two eastern North American conifers to identify and explore climatic drivers and to compare BI-climate relationships to those of tree-ring widths (TRW). Using dendrochronological techniques, Tsuga canadensis and Picea rubens TRW and BI chronologies were developed at Abbey Pond (ABP) and The Cape National Research Area (CAPE), Vermont, USA, respectively. Climate drivers (1901-2010) were investigated using correlation and response function analyses and generalized linear mixed models. The ABP T. canadensis BI model explained the highest amount of variance (R² = 0.350, adjusted R² = 0.324) with September Tmin and June total percent cloudiness as predictors. The ABP T. canadensis TRW model explained 34% of the variance (R² = 0.340, adjusted R² = 0.328) with summer total precipitation and June PDSI as predictors. The CAPE P. rubens TRW and BI models explained 31% of the variance (R² = 0.330, adjusted R² = 0.310), based on previous July Tmax, previous August Tmean, and fall Tmin as predictors, and 7% of the variance (R² = 0.068, adjusted R² = 0.060) based on spring Tmin as the predictor, respectively. Moving window analyses confirm the moisture sensitivity of T. canadensis TRW and now BI, and suggest an extension of the growing season. Similarly, P. rubens TRW responded consistently negatively to high growing season temperatures, but TRW and BI benefited from a longer growing season. This study introduces two new BI chronologies, the first from northeastern North America, and highlights shifts underway in tree response to changing climate.
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is treated as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical relationship between the FBG wavelength and the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of a landslide can be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and measured displacements decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results are increasingly reliable. This shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.
Code of Federal Regulations, 2013 CFR
2013-07-01
Title 40, Protection of Environment; Environmental Protection Agency (Continued); Solid Wastes (Continued); Hazardous Waste Management System: General; Rulemaking Petitions. § 260.33 Procedures for variances from classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations...
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment; Environmental Protection Agency (Continued); Solid Wastes (Continued); Hazardous Waste Management System: General; Rulemaking Petitions. § 260.33 Procedures for variances from classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations...
Code of Federal Regulations, 2012 CFR
2012-07-01
Title 40, Protection of Environment; Environmental Protection Agency (Continued); Solid Wastes (Continued); Hazardous Waste Management System: General; Rulemaking Petitions. § 260.33 Procedures for variances from classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations...
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
Strong Genetic Overlap Between Executive Functions and Intelligence
Engelhardt, Laura E.; Mann, Frank D.; Briley, Daniel A.; Church, Jessica A.; Harden, K. Paige; Tucker-Drob, Elliot M.
2016-01-01
Executive functions (EFs) are cognitive processes that control, monitor, and coordinate more basic cognitive processes. EFs play instrumental roles in models of complex reasoning, learning, and decision-making, and individual differences in EFs have been consistently linked with individual differences in intelligence. By middle childhood, genetic factors account for a moderate proportion of the variance in intelligence, and these effects increase in magnitude through adolescence. Genetic influences on EFs are very high, even in middle childhood, but the extent to which these genetic influences overlap with those on intelligence is unclear. We examined genetic and environmental overlap between EFs and intelligence in a racially and socioeconomically diverse sample of 811 twins ages 7-15 years (M = 10.91, SD = 1.74) from the Texas Twin Project. A general EF factor representing variance common to inhibition, switching, working memory, and updating domains accounted for substantial proportions of variance in intelligence, primarily via a genetic pathway. General EF continued to have a strong, genetically mediated association with intelligence even after controlling for processing speed. Residual variation in general intelligence was influenced only by shared and nonshared environmental factors, and there remained no genetic variance in general intelligence that was independent of EF. Genetic variance independent of EF did remain, however, in a more specific perceptual reasoning ability. These results provide evidence that genetic influences on general intelligence are highly overlapping with those on EF. PMID:27359131
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogenous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
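The trigonometric current patterns attributed to Isaacson (1986) are straightforward to generate for L equally spaced electrodes; a short sketch (idealized electrode geometry assumed, not the authors' statistical derivation):

```python
import numpy as np

def trig_patterns(L: int) -> np.ndarray:
    """Trigonometric current patterns for L equally spaced electrodes.

    Rows are cos(m*theta_k) for m = 1..L/2 and sin(m*theta_k) for
    m = 1..L/2-1; each row sums to zero, so it is a valid current pattern.
    """
    theta = 2.0 * np.pi * np.arange(L) / L
    rows = [np.cos(m * theta) for m in range(1, L // 2 + 1)]
    rows += [np.sin(m * theta) for m in range(1, L // 2)]
    return np.array(rows)

P = trig_patterns(16)        # 15 independent patterns on 16 electrodes
print(np.allclose(P.sum(axis=1), 0.0))                    # currents sum to zero
print(np.allclose(P @ P.T, np.diag(np.diag(P @ P.T))))    # mutually orthogonal
```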
Doppler color imaging. Principles and instrumentation.
Kremkau, F W
1992-01-01
DCI acquires Doppler-shifted echoes from a cross-section of tissue scanned by an ultrasound beam. These echoes are then presented in color and superimposed on the gray-scale anatomic image of non-Doppler-shifted echoes received during the scan. The flow echoes are assigned colors according to the color map chosen. Usually red, yellow, or white indicates positive Doppler shifts (approaching flow) and blue, cyan, or white indicates negative shifts (receding flow). Green is added to indicate variance (disturbed or turbulent flow). Several pulses (the number is called the ensemble length) are needed to generate a color scan line. Linear, convex, phased, and annular arrays are used to acquire the gray-scale and color-flow information. Doppler color-flow instruments are pulsed-Doppler instruments and are subject to the same limitations, such as Doppler angle dependence and aliasing, as other Doppler instruments. Color controls include gain, TGC, map selection, variance on/off, persistence, ensemble length, color/gray priority, Nyquist limit (PRF), baseline shift, wall filter, and color window angle, location, and size. Doppler color-flow instruments generally have output intensities intermediate between those of gray-scale imaging and pulsed-Doppler duplex instruments. Although there is no known risk with the use of color-flow instruments, prudent practice dictates that they be used for medical indications and with the minimum exposure time and instrument output required to obtain the needed diagnostic information.
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
A generalized conditional heteroscedastic model for temperature downscaling
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2014-11-01
This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale predictors from the Canadian Coupled General Circulation Model. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors were selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while a weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity test and the Brock-Dechert-Scheinkman (BDS) nonlinearity test showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
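A classical normal-theory version of such an error bar uses Var(s²) = 2σ⁴/(n−1), giving SE(s²) ≈ s²√(2/(n−1)); the authors' derivation may differ in detail, so the sketch below shows only this textbook approximation, which already illustrates how unreliable variance estimates from small samples are:

```python
import numpy as np

def variance_error_bar(sample: np.ndarray):
    """Sample variance with a classical normal-theory error bar.

    Under approximate normality, Var(s^2) = 2*sigma^4/(n-1), so the
    standard error of s^2 is estimated by s^2 * sqrt(2/(n-1)).
    """
    n = len(sample)
    s2 = sample.var(ddof=1)
    se = s2 * np.sqrt(2.0 / (n - 1))
    return s2, se

rng = np.random.default_rng(5)
for n in (10, 20, 50, 200):     # small samples give very wide error bars
    s2, se = variance_error_bar(rng.normal(0.0, 1.0, n))
    print(f"n={n:4d}  s^2={s2:5.2f} +/- {se:4.2f}")
```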
Johnston, Krista Haley Smith; Iarocci, Grace
2017-12-01
Generalized anxiety and depression symptoms may be associated with poorer social outcomes among children with Autism Spectrum Disorder (ASD) without intellectual disability. The goal of this study was to examine whether generalized anxiety and depression symptoms were associated with social competence after accounting for IQ, age, and gender in typically developing (TD) children and in children with ASD. Results indicated that for the TD group, generalized anxiety and depression accounted for 38% of the variance in social competence, and for children with ASD, they accounted for 29% of the variance in social competence. However, only depression accounted for a significant amount of the variance. The findings underscore the importance of assessing the social impact of internalizing symptoms in children with ASD.
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 29, Regulations Relating to Labor (Continued); Occupational Safety and Health Administration, Department of Labor; Williams-Steiger Occupational Safety and Health Act of 1970, General. § 1905.5 Effect of variances. All variances... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
2012-09-01
[Fragment of a U.S. Army Research Laboratory report by the ARL Translational Neuroscience Branch covering helmet-based EEG systems: the Emotiv EPOC, the Advanced Brain Monitoring (ABM) B-Alert X10, and the QUASAR DSI (see ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).]
ERIC Educational Resources Information Center
Johnson, Jim
2017-01-01
A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's use in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for a new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
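The abstract does not give the iteration itself, so the following is only a plausible sketch of the idea: gradient-projection minimization of w^H R w subject to a^H w = 1, warm-startable from a neighboring point's weights, with one O(L²) matrix-vector product per iteration (function names and test data are illustrative):

```python
import numpy as np

def mv_weights_iterative(R, a, w0=None, mu=None, iters=300):
    """Gradient-projection iteration for minimum variance weights:
    minimize w^H R w subject to a^H w = 1.

    Each iteration uses one matrix-vector product (O(L^2)) rather
    than inverting R (O(L^3)). Warm-starting from a neighboring
    imaging point's weights (w0) can cut the iteration count.
    """
    if mu is None:
        mu = 1.0 / np.trace(R).real                       # safe step size
    w = a / (a.conj() @ a) if w0 is None else w0.copy()   # DAS start
    for _ in range(iters):
        w = w - mu * (R @ w)                              # gradient step
        w = w + a * (1.0 - a.conj() @ w) / (a.conj() @ a) # re-project onto constraint
    return w

L = 16
rng = np.random.default_rng(6)
snaps = rng.standard_normal((L, 100)) + 1j * rng.standard_normal((L, 100))
R = snaps @ snaps.conj().T / 100 + 0.1 * np.eye(L)        # loaded covariance
a = np.ones(L, dtype=complex)                             # broadside steering

w_iter = mv_weights_iterative(R, a)
w_direct = np.linalg.solve(R, a) / (a.conj() @ np.linalg.solve(R, a))
print(np.linalg.norm(w_iter - w_direct))  # small, shrinking with more iterations
```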
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
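The authors provide SAS code; the following independent Python sketch only illustrates the partitioning idea on synthetic data: principal components of the presence-site covariance are ranked by variance, and the smallest-variance components score how well a location meets the putative limiting requirements:

```python
import numpy as np

def partitioned_d2(presence: np.ndarray, points: np.ndarray, k: int):
    """Partition Mahalanobis D^2 into principal components and score
    map points on the k smallest-variance components, the candidate
    limiting ("minimum") habitat requirements.

    presence: (n, p) environmental values at species detections.
    points:   (m, p) values at locations to score.
    """
    mean = presence.mean(axis=0)
    cov = np.cov(presence, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    z = (points - mean) @ evecs[:, :k]        # k most-constant combinations
    return np.sum(z**2 / evals[:k], axis=1)   # partial D^2 (chi^2-like score)

# Synthetic example: temperature is tightly constrained at presence
# sites (limiting), rainfall varies widely (non-limiting).
rng = np.random.default_rng(7)
pres = rng.normal([20.0, 500.0], [0.5, 150.0], size=(200, 2))
grid = rng.normal([20.0, 500.0], [3.0, 150.0], size=(5, 2))
print(partitioned_d2(pres, grid, k=1))  # low scores meet the limiting requirement
```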
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are examples of such impacts.
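For reference, the minimum variance (Capon) estimator being compared here has a compact closed form; the sketch below is a standard textbook formulation in Python (our illustration, not the authors' code), where `order` is the filter length that, as noted above, must exceed the number of distinct spectral lines.

    import numpy as np
    from scipy.linalg import toeplitz, solve

    def capon_spectrum(x, order, freqs, fs=1.0):
        """Minimum variance (Capon/MVM) spectral estimate of a real 1-D
        signal x at the given frequencies."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
        R = toeplitz(r[:order])                    # Toeplitz covariance estimate
        P = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            a = np.exp(-2j * np.pi * f / fs * np.arange(order))   # steering vector
            P[i] = order / np.real(a.conj() @ solve(R, a))        # no explicit inverse
        return P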
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method both in reaching optimal solutions and in robustness.
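The MVDR weights that ECGSA refines have the classic closed form w = R⁻¹a / (a^H R⁻¹ a); a minimal numpy sketch (ours, for orientation only):

    import numpy as np

    def mvdr_weights(R, a):
        """Minimum variance distortionless response weights: minimize the
        output power w^H R w subject to unit gain (a^H w = 1) toward the
        steering vector a."""
        Ria = np.linalg.solve(R, a)      # R^{-1} a without forming R^{-1}
        return Ria / (a.conj() @ Ria)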
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws: from a minimum variance (MV) control law to a minimum variance and pole placements coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only provide three theoretically proven control laws for a developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
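As orientation for the MV control law mentioned above, here is a textbook one-step-ahead minimum variance controller for a first-order ARX model (a hedged sketch, not the authors' EDM controller; the model order and the assumption b != 0 are ours):

    def mv_control_step(theta, y_t, setpoint=0.0):
        """One-step-ahead minimum variance control for
            y[t+1] = a*y[t] + b*u[t] + e[t+1],
        with theta = (a, b) estimated online (e.g., by recursive least
        squares). Driving the one-step prediction to the setpoint leaves
        only the unpredictable noise e[t+1] in the output, which is the
        minimum achievable output variance."""
        a, b = theta
        return (setpoint - a * y_t) / b   # requires b != 0 (controllable model)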
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
Generalized Variance Function Applications in Forestry
James Alegria; Charles T. Scott
1991-01-01
Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
Strong genetic overlap between executive functions and intelligence.
Engelhardt, Laura E; Mann, Frank D; Briley, Daniel A; Church, Jessica A; Harden, K Paige; Tucker-Drob, Elliot M
2016-09-01
Executive functions (EFs) are cognitive processes that control, monitor, and coordinate more basic cognitive processes. EFs play instrumental roles in models of complex reasoning, learning, and decision making, and individual differences in EFs have been consistently linked with individual differences in intelligence. By middle childhood, genetic factors account for a moderate proportion of the variance in intelligence, and these effects increase in magnitude through adolescence. Genetic influences on EFs are very high, even in middle childhood, but the extent to which these genetic influences overlap with those on intelligence is unclear. We examined genetic and environmental overlap between EFs and intelligence in a racially and socioeconomically diverse sample of 811 twins ages 7 to 15 years (M = 10.91, SD = 1.74) from the Texas Twin Project. A general EF factor representing variance common to inhibition, switching, working memory, and updating domains accounted for substantial proportions of variance in intelligence, primarily via a genetic pathway. General EF continued to have a strong, genetically mediated association with intelligence even after controlling for processing speed. Residual variation in general intelligence was influenced only by shared and nonshared environmental factors, and there remained no genetic variance in general intelligence that was unique to EF. Genetic variance independent of EF did remain, however, in a more specific perceptual reasoning ability. These results provide evidence that genetic influences on general intelligence are highly overlapping with those on EF. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer's fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
Assumption-free estimation of the genetic contribution to refractive error across childhood.
Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy
2015-01-01
Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.
Anthropogenic noise decreases urban songbird diversity and may contribute to homogenization.
Proppe, Darren S; Sturdy, Christopher B; St Clair, Colleen Cassady
2013-04-01
More humans reside in urban areas than at any other time in history. Protected urban green spaces and transportation greenbelts support many species, but diversity in these areas is generally lower than in undeveloped landscapes. Habitat degradation and fragmentation contribute to lowered diversity and urban homogenization, but less is known about the role of anthropogenic noise. Songbirds are especially vulnerable to anthropogenic noise because they rely on acoustic signals for communication. Recent studies suggest that anthropogenic noise reduces the density and reproductive success of some bird species, but that species which vocalize at frequencies above those of anthropogenic noise are more likely to inhabit noisy areas. We hypothesize that anthropogenic noise is contributing to declines in urban diversity by reducing the abundance of select species in noisy areas, and that species with low-frequency songs are those most likely to be affected. To examine this relationship, we calculated the noise-associated change in overall species richness and in abundance for seven common songbird species. After accounting for variance due to vegetative differences, species richness and the abundance of three of seven species were reduced in noisier locations. Acoustic analysis revealed that minimum song frequency was highly predictive of a species' response to noise, with lower minimum song frequencies incurring greater noise-associated reduction in abundance. These results suggest that anthropogenic noise affects some species independently of vegetative conditions, exacerbating the exclusion of some songbird species in otherwise suitable habitat. Minimum song frequency may provide a useful metric to predict how particular species will be affected by noise. In sum, mitigation of noise may enhance habitat suitability for many songbird species, especially for species with songs that include low-frequency elements. © 2012 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi
2002-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
NASA Technical Reports Server (NTRS)
Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
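Both of these studies rest on the same standard (Sonnerup-Cahill) minimum variance analysis; a short numpy sketch of that procedure (our illustration, not the authors' code) is:

    import numpy as np

    def minimum_variance_analysis(B):
        """Minimum variance analysis of a magnetic field series B (N x 3).
        The eigenvector of the variance matrix with the smallest eigenvalue
        estimates the normal to the discontinuity; the intermediate-to-
        minimum eigenvalue ratio gauges how well the normal is determined."""
        M = np.cov(B, rowvar=False)         # 3x3 magnetic variance matrix
        evals, evecs = np.linalg.eigh(M)    # eigenvalues in ascending order
        return evecs[:, 0], evals[1] / evals[0]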
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
On zero variance Monte Carlo path-stretching schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lux, I.
1983-08-01
A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia, in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge system.
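A hedged sketch of the optimization loop described above (simulated annealing over candidate gauge sites against a user-supplied geostatistical variance objective; all names, the cooling schedule, and the single-swap move are our assumptions, not the authors' implementation):

    import numpy as np

    def anneal_gauge_network(candidates, est_variance, n_select,
                             n_iter=5000, T0=1.0, cooling=0.999, seed=0):
        """Pick n_select sites from `candidates` (array of coordinates)
        minimizing est_variance(sites), e.g. an average kriging variance."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(candidates), n_select, replace=False)
        cost = est_variance(candidates[idx])
        best, best_cost, T = idx.copy(), cost, T0
        for _ in range(n_iter):
            new = idx.copy()
            new[rng.integers(n_select)] = rng.choice(
                np.setdiff1d(np.arange(len(candidates)), new))   # move one gauge
            new_cost = est_variance(candidates[new])
            if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
                idx, cost = new, new_cost                        # accept move
                if cost < best_cost:
                    best, best_cost = idx.copy(), cost
            T *= cooling                                         # cool down
        return candidates[best], best_cost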
Experimental demonstration of quantum teleportation of a squeezed state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro
2005-10-15
Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe a smaller variance of the teleported squeezed state than that for the vacuum state input.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
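The white-sequence quantizer model invoked here leads to the textbook error variance Δ²/12 for step size Δ, from which an effective SNR follows directly (our sketch of that standard identity, not the paper's derivation):

    import numpy as np

    def quantizer_effective_snr_db(signal_power, n_bits, full_scale):
        """Effective SNR under the uniform white-noise quantizer model:
        step size D = full_scale / 2**n_bits, error variance D**2 / 12."""
        delta = full_scale / 2 ** n_bits
        return 10 * np.log10(signal_power / (delta ** 2 / 12.0))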
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
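A compact sketch of the constant-variance (OLS) branch of the LCMRL computation, assuming a grid scan in place of the exact intersection rule (our illustration; the EPA procedure also covers the variance-weighted case, which is not reproduced here):

    import numpy as np
    from scipy import stats

    def lcmrl_ols(true_conc, measured, conf=0.99, lo=0.5, hi=1.5):
        """Fit measured vs. true concentration by OLS, build two-sided
        prediction bands at `conf`, and return the lowest concentration
        where the band lies inside the 50%-150% recovery lines."""
        x, y = np.asarray(true_conc, float), np.asarray(measured, float)
        n = len(x)
        slope, intercept = np.polyfit(x, y, 1)
        s = np.sqrt(np.sum((y - (intercept + slope * x)) ** 2) / (n - 2))
        t = stats.t.ppf(1 - (1 - conf) / 2, n - 2)
        xbar, sxx = x.mean(), np.sum((x - x.mean()) ** 2)
        grid = np.linspace(x.min(), x.max(), 500)
        half = t * s * np.sqrt(1 + 1 / n + (grid - xbar) ** 2 / sxx)
        pred = intercept + slope * grid
        ok = (pred - half >= lo * grid) & (pred + half <= hi * grid)
        return float(grid[ok].min()) if ok.any() else None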
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than with the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated and experimental data, and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformers. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify imaging resolution, and the contrast ratio (CR) to quantify imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformers respectively. In addition, the effect of sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformers, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size, and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size are explained. Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used software for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and that reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
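For example, Cohen's d for two independent groups and the matching minimum sample size can be computed in a few lines (statsmodels used here as a stand-in for G*Power 3; the d = 0.5, alpha = .05, power = .80 inputs are illustrative):

    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    def cohens_d(x1, x2):
        """Cohen's d with pooled standard deviation."""
        n1, n2 = len(x1), len(x2)
        sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                      (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
        return (np.mean(x1) - np.mean(x2)) / sp

    # minimum n per group to detect d = 0.5 at alpha = .05 with power = .80
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)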
NASA Astrophysics Data System (ADS)
Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng
2011-03-01
The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem by using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both single-measurement and multiple-measurement packets are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally be dependent on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.
Interplanetary sector boundaries, 1971 - 1973
NASA Technical Reports Server (NTRS)
Klein, L.; Burlaga, L. F.
1979-01-01
Eighteen interplanetary sector boundary crossings observed at 1 AU by the magnetometer on the IMP-6 spacecraft are discussed. The events were examined on many different time scales ranging from days on either side of the boundary to high resolution measurements of 12.5 vectors per second. Two categories of boundaries were found, one group being relatively thin and the other being thick. In many cases the field vector rotated in a plane from one polarity to the other. Only two of the transitions were null sheets. Using the minimum variance analysis to determine the normals to the plane of rotation, and assuming that this is the same as the normal to the sector boundary surface, it was found that the normals were close to the ecliptic plane. An analysis of tangential discontinuities contained in 4-day periods about the events showed that their orientations were generally not related to the orientations of the sector boundary surface, but rather their characteristics were about the same as those for discontinuities outside the sector boundaries.
Characterization of large price variations in financial markets
NASA Astrophysics Data System (ADS)
Johansen, Anders
2003-06-01
Statistics of drawdowns (the loss from the last local maximum to the next local minimum) plays an important role in risk assessment of investment strategies. As they incorporate higher (greater than two) order correlations, they offer a better measure of real market risks than the variance or other cumulants of daily (or some other fixed time scale) returns. Previous results have shown that the vast majority of drawdowns occurring on the major financial markets have a distribution which is well represented by a stretched exponential, while the largest drawdowns occur with a significantly larger rate than predicted by the bulk of the distribution and should thus be characterized as outliers (Eur. Phys. J. B 1 (1998) 141; J. Risk 2001). In the present analysis, the definition of drawdowns is generalized to coarse-grained drawdowns or so-called ε-drawdowns, and a link between such ε-outliers and preceding log-periodic power law bubbles previously identified (Quantitative Finance 1 (2001) 452) is established.
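Under the definition above, drawdowns can be extracted from a price series with a short scan (a minimal sketch, ours; the sign and normalization conventions are assumptions):

    import numpy as np

    def drawdowns(prices):
        """Relative losses from each local maximum to the following local
        minimum of a price series."""
        p = np.asarray(prices, dtype=float)
        out, i = [], 0
        while i < len(p) - 1:
            while i < len(p) - 1 and p[i + 1] >= p[i]:
                i += 1                   # climb to a local maximum
            peak = i
            while i < len(p) - 1 and p[i + 1] <= p[i]:
                i += 1                   # descend to the next local minimum
            if i > peak:
                out.append((p[peak] - p[i]) / p[peak])
        return np.array(out)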
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Meghan K.; Argall, Matthew R.; Joyce, Colin J., E-mail: mkl54@wildcats.unh.edu, E-mail: Matthew.Argall@unh.edu, E-mail: cjl46@wildcats.unh.edu
We report observations of low-frequency waves at 1 au by the magnetic field instrument on the Advanced Composition Explorer (ACE/MAG) and show evidence that they arise due to newborn interstellar pickup He⁺. Twenty-five events are studied. They possess the generally predicted attributes: spacecraft-frame frequencies slightly greater than the He⁺ cyclotron frequency, left-hand polarization in the spacecraft frame, and transverse fluctuations with minimum variance directions that are quasi-parallel to the mean magnetic field. Their occurrence spans the first 18 years of ACE operations, with no more than 3 such observations in any given year. Thus, the events are relatively rare. As with past observations by the Ulysses and Voyager spacecraft, we argue that the waves are seen only when the background turbulence is sufficiently weak as to allow for the slow accumulation of wave energy over many hours.
Roux, C Z
2009-05-01
Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA-genes with slow substitution rates. For consistently short distances, it is proved that in the completely singular limit of the covariance matrix ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
Radiation exposure and performance of multiple burn LEO-GEO orbit transfer trajectories
NASA Technical Reports Server (NTRS)
Gorland, S. H.
1985-01-01
Many potential strategies exist for the transfer of spacecraft from low Earth orbit (LEO) to geosynchronous (GEO) orbit. One strategy has generally been utilized, that being a single impulsive burn at perigee and a GEO insertion burn at apogee. Multiple burn strategies were discussed for orbit transfer vehicles (OTVs), but the transfer times and radiation exposure, particularly for potentially manned missions, were used as arguments against those options. Quantitative results are presented concerning the trip time and radiation encountered by multiple burn orbit transfer missions, in order to establish the feasibility of manned missions, the vulnerability of electronics, and the shielding requirements. The performance of these multiple burn missions is quantified in terms of the payload and propellant variances from the minimum energy mission transfer. The missions analyzed varied from one to eight perigee burns and ranged from a high thrust, 1 g acceleration, cryogenic hydrogen-oxygen chemical propulsion system to a continuous burn, 0.001 g acceleration, hydrogen-fueled resistojet propulsion system with a trip time of 60 days.
Soave, David; Sun, Lei
2017-09-01
We generalize Levene's test for variance (scale) heterogeneity between k groups for more complex data, when there are sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²(k-1)/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
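A hedged sketch of the two-stage logic (our simplification: independent observations, known group membership, and a Levene-type stage 2 in place of the full gS machinery):

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    def scale_test_two_stage(y, X, groups):
        """Stage 1: remove location effects by least absolute deviation
        (median) regression, as the paper shows is required. Stage 2: test
        the absolute residuals for scale differences across groups."""
        exog = sm.add_constant(X)
        lad = sm.QuantReg(y, exog).fit(q=0.5)     # LAD = median regression
        d = np.abs(y - lad.predict(exog))         # absolute residuals
        samples = [d[groups == g] for g in np.unique(groups)]
        return stats.f_oneway(*samples)           # (F statistic, p-value)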
ERIC Educational Resources Information Center
Wass, Christopher; Pizzo, Alessandro; Sauce, Bruno; Kawasumi, Yushi; Sturzoiu, Tudor; Ree, Fred; Otto, Tim; Matzel, Louis D.
2013-01-01
A common source of variance (i.e., "general intelligence") underlies an individual's performance across diverse tests of cognitive ability, and evidence indicates that the processing efficacy of working memory may serve as one such source of common variance. One component of working memory, selective attention, has been reported to…
30 CFR 75.1103-3 - Automatic fire sensor and warning device systems; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Automatic fire sensor and warning device systems; minimum requirements; general. 75.1103-3 Section 75.1103-3 Mineral Resources MINE SAFETY AND...-UNDERGROUND COAL MINES Fire Protection § 75.1103-3 Automatic fire sensor and warning device systems; minimum...
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
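A minimal FAST run on a toy function, with SALib as a stand-in implementation (not the authors' code; the Ishigami-style test function and sample size are illustrative):

    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    problem = {"num_vars": 3, "names": ["x1", "x2", "x3"],
               "bounds": [[-np.pi, np.pi]] * 3}
    X = fast_sampler.sample(problem, 1024)       # periodic search-curve sampling
    Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
         + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
    Si = fast.analyze(problem, Y)                # partial variances -> indices
    print(Si["S1"], Si["ST"])                    # main-effect and total indices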
40 CFR 142.303 - Which size public water systems can receive a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...
ERIC Educational Resources Information Center
Penfield, Randall D.; Algina, James
2006-01-01
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false For which of the regulatory requirements is a small system variance available? 142.304 Section 142.304 Protection of Environment... REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.304 For which of the regulatory...
40 CFR 142.303 - Which size public water systems can receive a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...
A statistical mechanical theory for a two-dimensional model of water
Urbic, Tomaz; Dill, Ken A.
2010-01-01
We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the “Mercedes-Benz” (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water’s heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water’s large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state. PMID:20550408
A statistical mechanical theory for a two-dimensional model of water
NASA Astrophysics Data System (ADS)
Urbic, Tomaz; Dill, Ken A.
2010-06-01
We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.
A statistical mechanical theory for a two-dimensional model of water.
Urbic, Tomaz; Dill, Ken A
2010-06-14
We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
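In code, the Wilks quantities are one determinant away (a minimal numpy sketch of the definitions above):

    import numpy as np

    def wilks_standard_deviation(X):
        """Square root of the generalized variance, i.e. of the determinant
        of the sample covariance matrix, for data rows X."""
        return np.sqrt(np.linalg.det(np.cov(X, rowvar=False)))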
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale, appear to be log-normally distributed.
40 CFR 264.97 - General ground-water monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paragraph (i) of this section. (1) A parametric analysis of variance (ANOVA) followed by multiple... mean levels for each constituent. (2) An analysis of variance (ANOVA) based on ranks followed by...
A New Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the interplanetary current sheet local parameters. The method, called 'HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, although in many cases the two behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than the MVM.
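For orientation, the unmodified Harris profile underlying HYTARO models the reversing tangential field component as a hyperbolic tangent across the sheet; a minimal sketch (ours; HYTARO's modifications and rotation geometry are not reproduced, and the parameters are illustrative):

    import numpy as np

    def harris_profile(z, B0, L):
        """Classic Harris sheet: tangential field component B0*tanh(z/L),
        reversing sign across the current sheet of half-thickness L."""
        return B0 * np.tanh(z / L)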
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation (RMSCV) values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of final treatment outcome. The purpose of this study was to evaluate and compare the maximum reproducibility with minimum variation of natural head position using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problem or chronic mouth breathing; any congenital deformity; history of traumatically-induced deformity; history of myofacial pain syndrome; any previous history of head and neck surgery. The results showed that both methods for obtaining natural head position, the mirror method and the fluid level device method, were comparable without any significant difference, but maximum reproducibility was greater with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and minimum variance was seen with the fluid level device method, as shown by precision and the Pearson correlation. The fluid level device method was thus more reproducible and showed less variance than the mirror method for obtaining natural head position.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain a preliminary set of influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate excellent performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, minimum and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
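The variance-based second step can be reproduced with the SALib package. The sketch below runs Sobol's analysis on a cheap stand-in function, since DHSVM itself cannot be run here; the parameter names and bounds are illustrative assumptions, not the paper's configuration, and the `saltelli` sampler module name varies slightly across SALib versions:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-ins for three DHSVM-like parameters (bounds illustrative)
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "rain_lai_multiplier"],
    "bounds": [[1e-4, 1e-2], [0.3, 0.6], [1e-4, 1e-3]],
}

def toy_model(x):
    # Stand-in response surface; the full DHSVM run would go here
    return 1e3 * x[0] + np.sin(10.0 * x[1]) + 1e6 * x[2] * x[0]

X = saltelli.sample(problem, 1024)       # Saltelli sampling design
Y = np.array([toy_model(x) for x in X])
Si = sobol.analyze(problem, Y)           # first-order and total-order indices
print("S1:", Si["S1"], "ST:", Si["ST"])
```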
Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.
2014-01-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180
Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E
2014-04-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Salvucci, Sameena; And Others
This technical report provides the results of a study on the calculation and use of generalized variance functions (GVFs) and design effects for the 1990-91 Schools and Staffing Survey (SASS). The SASS is a periodic integrated system of sample surveys conducted by the National Center for Education Statistics (NCES) that produces sampling variances…
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience-oriented, convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the optimal trajectories found so far are retained and the search restarts from them, allowing the algorithm to avoid local optima. The agents can also move faster through the search space, giving better exploration during the first stage of the search, and can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results are compared with those of some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
A New Look at Some Solar Wind Turbulence Puzzles
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2006-01-01
Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
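As a concrete illustration of separating intra- from intersubject variance, a linear mixed model with a random intercept per subject can be fit with statsmodels. The data and column names below are synthetic and hypothetical, not from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: many trials nested within each subject
rng = np.random.default_rng(0)
n_subj, n_trials = 20, 60
subj = np.repeat(np.arange(n_subj), n_trials)
cond = np.tile([0, 1], n_subj * n_trials // 2)
subj_eff = rng.normal(0, 1.0, n_subj)[subj]     # intersubject variability
log_power = 5.0 + 0.3 * cond + subj_eff + rng.normal(0, 0.5, subj.size)
df = pd.DataFrame({"log_power": log_power, "condition": cond, "subject": subj})

# A random intercept per subject separates intersubject variance from the
# residual (intrasubject, trial-to-trial) variance, so the task contrast
# is tested against the appropriate error term, as the abstract argues
fit = smf.mixedlm("log_power ~ condition", df, groups=df["subject"]).fit()
print(fit.summary())
```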
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland
Pluhowski, E.J.
1981-01-01
Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April, whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide-ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
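A seasonal harmonic of the kind used here can be fit by ordinary least squares on sine/cosine regressors. The sketch below assumes daily temperatures and a one-year period; the synthetic data are illustrative only:

```python
import numpy as np

def fit_annual_harmonic(day_of_year, temp, period=365.25):
    """Fit T(t) = m + a*cos(2*pi*t/P) + b*sin(2*pi*t/P) by least squares."""
    w = 2 * np.pi * np.asarray(day_of_year) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    (m, a, b), *_ = np.linalg.lstsq(X, temp, rcond=None)
    amplitude = np.hypot(a, b)     # seasonal swing about the annual mean
    phase = np.arctan2(b, a)       # timing of the annual peak (radians)
    return m, amplitude, phase

# Illustrative use with synthetic daily maximum temperatures
t = np.arange(1, 366)
T = 12.5 + 10 * np.cos(2 * np.pi * (t - 200) / 365.25) \
    + np.random.default_rng(1).normal(0, 2.0, t.size)
print(fit_annual_harmonic(t, T))   # mean ~12.5, amplitude ~10
```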
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models, corresponding to deep and shallow waters respectively, and (2) four probabilistic models describing the environmental noises observed within Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring any inversion to be performed. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performance that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool for characterizing minimum uncertainties in inverted ocean color geophysical parameters.
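For an additive Gaussian noise model with known covariance C, the CRB follows from the Fisher information J = HᵀC⁻¹H, where H is the Jacobian of the forward (reflectance) model with respect to the parameters. A generic numerical sketch, where the forward model and values are placeholders rather than the paper's bio-optical models:

```python
import numpy as np

def crb(forward, theta, C, eps=1e-6):
    """Cramer-Rao bounds for unbiased estimation of theta under
    y = forward(theta) + n, n ~ N(0, C). Jacobian by finite differences."""
    theta = np.asarray(theta, float)
    y0 = forward(theta)
    H = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        dt = np.zeros_like(theta)
        dt[j] = eps
        H[:, j] = (forward(theta + dt) - y0) / eps
    J = H.T @ np.linalg.solve(C, H)       # Fisher information matrix
    return np.diag(np.linalg.inv(J))      # minimum variance per parameter

# Placeholder two-parameter "reflectance" model over 5 spectral bands
f = lambda th: th[0] * np.exp(-th[1] * np.arange(5))
print(crb(f, [0.3, 0.5], 1e-4 * np.eye(5)))
```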
Wagenaar, A C; Wolfson, M
1995-01-01
The authors analyzed patterns of criminal and administrative enforcement of the legal minimum drinking age across 295 counties in four States. Data on all arrests and other actions for liquor law violations from 1988 through 1990 were collected from the Federal Bureau of Investigation Uniform Crime Reporting System, State Uniform Crime Reports, and State Alcohol Beverage Control Agencies. Analytic methods used include Spearman rank-order correlation, single-linkage cluster analysis, and multiple regression modeling. Results confirmed low rates of enforcement of the legal drinking age, particularly for actions against those who sell or provide alcohol to underage youth. More than a quarter of all counties examined had no Alcoholic Beverage Control Agency actions against retailers for sales of alcohol to minors during the three years studied. Analyses indicate that 58 percent of the county-by-county variance in enforcement of the youth liquor law can be accounted for by eight community characteristics. The rate of arrests for general minor crime was strongly related to the rate of arrests for violations of the youth liquor law, while the number of law enforcement officers per capita was not related to arrests for underage drinking. Raising the legal drinking age to 21 years had substantial benefits in terms of reduced drinking and reduced automobile crashes among youths, despite the low level of enforcement. The potential benefits of active enforcement of minimum drinking age statutes are substantial, particularly if efforts are focused on those who provide alcohol to youth.
ERIC Educational Resources Information Center
Shieh, Gwowen; Jan, Show-Li
2015-01-01
The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
2008-12-01
slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
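The workflow described here (assign input distributions, propagate them by Monte Carlo, then rank inputs by their contribution to outcome variance) can be sketched in a few lines. The model and distributions below are illustrative stand-ins, and the squared rank correlation is only one simple importance measure, not the paper's method:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 20_000

# Illustrative input distributions (stand-ins for fate-model parameters)
inputs = {
    "half_life": rng.lognormal(mean=2.0, sigma=0.5, size=n),
    "partition": rng.lognormal(mean=0.0, sigma=0.3, size=n),
    "intake":    rng.normal(loc=1.0, scale=0.05, size=n),
}

# Stand-in multimedia model combining the sampled inputs
y = inputs["half_life"] * inputs["partition"] ** 2 + inputs["intake"]
print("outcome variance:", y.var())

# Crude influence ranking: squared rank correlation with the outcome
for name, x in inputs.items():
    rho, _ = spearmanr(x, y)
    print(f"{name}: ~{100 * rho**2:.1f}% of rank-based variance")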
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
NASA Astrophysics Data System (ADS)
Karl, Thomas R.; Wang, Wei-Chyung; Schlesinger, Michael E.; Knight, Richard W.; Portman, David
1990-10-01
Important surface observations such as the daily maximum and minimum temperature, daily precipitation, and cloud ceilings often have localized characteristics that are difficult to reproduce with the current resolution and physical parameterizations of state-of-the-art General Circulation climate Models (GCMs). Many of the difficulties can be partially attributed to mismatches in scale, local topography, regional geography, and boundary conditions between models and surface-based observations. Here, we present a method, called climatological projection by model statistics (CPMS), to relate GCM grid-point free-atmosphere statistics, the predictors, to these important local surface observations. The method can be viewed as a generalization of the model output statistics (MOS) and perfect prog (PP) procedures used in numerical weather prediction (NWP) models. It consists of the application of three statistical methods: (1) principal component analysis (PCA), (2) canonical correlation, and (3) inflated regression analysis. The PCA reduces the redundancy of the predictors. The canonical correlation is used to develop simultaneous relationships between linear combinations of the predictors, the canonical variables, and the surface-based observations. Finally, inflated regression is used to relate the important canonical variables to each of the surface-based observed variables. We demonstrate that even an early version of the Oregon State University two-level atmospheric GCM (with prescribed sea surface temperature) produces free-atmosphere statistics that can, when standardized using the model's internal means and variances (the MOS-like version of CPMS), closely approximate the observed local climate. When the model data are standardized by the observed free-atmosphere means and variances (the PP version of CPMS), however, the model does not reproduce the observed surface climate as well. Our results indicate that in the MOS-like version of CPMS the differences between the output of a ten-year GCM control run and the surface-based observations are often smaller than the differences between the observations of two ten-year periods. Such positive results suggest that GCMs may already contain important climatological information that can be used to infer the local climate.
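The three-step CPMS pipeline maps onto standard tools: PCA to compress the predictors, canonical correlation to link them to the surface variables, and a final regression. In this sketch, ordinary least squares stands in for the paper's inflated regression, and all arrays are synthetic stand-ins for GCM statistics and surface observations:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

# Hypothetical arrays: rows are time samples; X holds free-atmosphere
# statistics, Y holds co-located surface observations
rng = np.random.default_rng(1)
X_gcm = rng.normal(size=(500, 40))
Y_obs = X_gcm[:, :3] @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(500, 2))

# Step 1: PCA removes redundancy among the predictors
Z = PCA(n_components=10).fit_transform(X_gcm)

# Step 2: canonical correlation links PC scores to the surface variables
cca = CCA(n_components=2).fit(Z, Y_obs)
U, V = cca.transform(Z, Y_obs)

# Step 3: regress each surface variable on the canonical variables
# (OLS here; inflated regression would rescale to restore variance)
reg = LinearRegression().fit(U, Y_obs)
print("aggregate R^2 across surface variables:", reg.score(U, Y_obs))
```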
Response to selection while maximizing genetic variance in small populations.
Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E
2016-09-20
Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their small census size compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer a higher probability of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize genetic variance within a single population could be a valuable option. The aim of this work was to study the effect of maximizing genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), an MVT scenario with a restriction on increases in average inbreeding (D), an MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is a valuable alternative, in particular for long-term response to selection.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, while V could not be detected. Na, K and Ca were present at high concentrations. Ti showed the largest variance in content, while K showed the smallest. Four principal components were obtained from the original data, with a cumulative variance contribution rate of 81.542%; the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics of N. roborowskii fruits are related to geographical origin, as clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; simulation extrapolation (SIMEX) and bootstrap techniques can be used to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of a calculated metric can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has previously been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize the bias and variance of metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that the SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE of the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique yields SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics from single scans, and may be extended for bias and variance estimation of a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
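The SIMEX recipe itself is compact: re-add noise at several inflation levels λ, track the metric of interest, fit a low-order polynomial in λ, and extrapolate to λ = -1 (the zero-noise case). The sketch below applies it to a deliberately noise-biased quantity, the sample standard deviation of noisy data; it is illustrative only and is not the paper's Q-ball pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(0.0, 1.0, size=2000)            # "true" data
sigma_noise = 0.5
observed = signal + rng.normal(0.0, sigma_noise, size=signal.size)

metric = lambda x: x.std()                          # biased upward by noise

# Simulation step: total added noise variance is (1 + lam) * sigma_noise^2
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    reps = [metric(observed + rng.normal(0.0, np.sqrt(lam) * sigma_noise,
                                         size=observed.size))
            for _ in range(50)]
    est.append(np.mean(reps))

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1
coef = np.polyfit(lams, est, deg=2)
simex = np.polyval(coef, -1.0)
print(f"naive: {metric(observed):.3f}  SIMEX: {simex:.3f}  "
      f"truth: {signal.std():.3f}")
```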
Tackett, Jennifer L.; Lahey, Benjamin B.; Hulle, Carol Van; Waldman, Irwin; Krueger, Robert F.; Rathouz, Paul J.
2014-01-01
Previous research using confirmatory factor analysis to model psychopathology comorbidity supported the hypothesis of a broad general factor (i.e., a “bifactor”; Holzinger & Swineford, 1937) of psychopathology in children, adolescents, and adults, with more specific higher-order internalizing and externalizing factors reflecting additional shared variance in symptoms (Lahey et al., 2012; Lahey, Van Hulle, Singh, Waldman, & Rathouz, 2011). The psychological nature of this general factor has not been explored, however. The current study tests a prediction derived from the spectrum hypothesis of personality and psychopathology, that variance in a general psychopathology bifactor overlaps substantially—at both phenotypic and genetic levels—with the dispositional trait of negative emotionality. Data on psychopathology symptoms and dispositional traits were collected from both parents and youth in a representative sample of 1,569 twin pairs (ages 9–17) from Tennessee. Predictions based on the spectrum hypothesis were supported, with variance in negative emotionality and the general factor overlapping substantially at both phenotypic and etiologic levels. Furthermore, stronger correlations were found between negative emotionality and the general psychopathology factor than among other dispositions and other psychopathology factors. PMID:24364617
Tackett, Jennifer L; Lahey, Benjamin B; van Hulle, Carol; Waldman, Irwin; Krueger, Robert F; Rathouz, Paul J
2013-11-01
Previous research using confirmatory factor analysis to model psychopathology comorbidity has supported the hypothesis of a broad general factor (i.e., a "bifactor"; Holzinger & Swineford, 1937) of psychopathology in children, adolescents, and adults, with more specific higher order internalizing and externalizing factors reflecting additional shared variance in symptoms (Lahey et al., 2012; Lahey, van Hulle, Singh, Waldman, & Rathouz, 2011). The psychological nature of this general factor has not been explored, however. The current study tested a prediction, derived from the spectrum hypothesis of personality and psychopathology, that variance in a general psychopathology bifactor overlaps substantially-at both phenotypic and genetic levels-with the dispositional trait of negative emotionality. Data on psychopathology symptoms and dispositional traits were collected from both parents and youth in a representative sample of 1,569 twin pairs (ages 9-17 years) from Tennessee. Predictions based on the spectrum hypothesis were supported, with variance in negative emotionality and the general factor overlapping substantially at both phenotypic and etiologic levels. Furthermore, stronger correlations were found between negative emotionality and the general psychopathology factor than among other dispositions and other psychopathology factors. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Watkins, Marley W
2010-12-01
The structure of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; D. Wechsler, 2003a) was analyzed via confirmatory factor analysis among a national sample of 355 students referred for psychoeducational evaluation by 93 school psychologists from 35 states. The structure of the WISC-IV core battery was best represented by four first-order factors as per D. Wechsler (2003b), plus a general intelligence factor in a direct hierarchical model. The general factor was the predominant source of variation among WISC-IV subtests, accounting for 48% of the total variance and 75% of the common variance. The largest first-order factor, Processing Speed, accounted for only 6.1% of the total and 9.5% of the common variance. Given these explanatory contributions, recommendations favoring interpretation of the first-order factor scores over the general intelligence score appear to be misguided.
30 CFR 202.53 - Minimum royalty.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 202.53 Section 202.53 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ROYALTIES Oil, Gas, and OCS Sulfur, General § 202.53 Minimum royalty. For leases that provide for minimum royalty...
Plasma dynamics on current-carrying magnetic flux tubes
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1992-01-01
A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.
Code of Federal Regulations, 2010 CFR
2010-01-01
... a level that is sufficient to ensure the continued financial viability of the Enterprise and that equals or exceeds the minimum capital requirement contained in this subpart A. ... AND SOUNDNESS CAPITAL Minimum Capital § 1750.1 General. The regulation contained in this subpart A...
26 CFR 1.55-1 - Alternative minimum taxable income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative minimum taxable income. 1.55-1... TAXES Tax Surcharge § 1.55-1 Alternative minimum taxable income. (a) General rule for computing alternative minimum taxable income. Except as otherwise provided by statute, regulations, or other published...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
41 CFR 50-201.1101 - Minimum wages.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...
NASA Astrophysics Data System (ADS)
Li, Zhi; Jin, Jiming
2017-11-01
Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow, with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly, but its variance ratio of 2011-2040 to 1961-2005 was 1.25 ± 0.55. Streamflow variability was predicted to become greater in most months on the seasonal scale because of increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of the data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization; this approach handles both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return earned compared to the mean-variance approach.
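The underlying optimization in both variants is the same quadratic program: minimize portfolio variance wᵀΣw subject to a target return and full investment. The sketch below uses the median of the returns as the location estimate, in the spirit of the paper's median-variance idea; the return data, target, and long-only constraint are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
returns = rng.normal(0.0005, 0.01, size=(250, 30))  # stand-in daily returns

mu = np.median(returns, axis=0)        # median location (vs. mean-variance)
Sigma = np.cov(returns, rowvar=False)  # sample covariance as the risk model
target = np.quantile(mu, 0.75)         # illustrative target return

n = mu.size
res = minimize(
    lambda w: w @ Sigma @ w,                       # portfolio variance
    x0=np.full(n, 1.0 / n),
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                 {"type": "eq", "fun": lambda w: w @ mu - target}],
    bounds=[(0.0, 1.0)] * n,                       # long-only
)
w = res.x
print("risk (stdev):", np.sqrt(w @ Sigma @ w), "return:", w @ mu)
```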
NASA Astrophysics Data System (ADS)
Marchi, Susanna; Guidotti, Diego; Ricciolini, Massimo; Petacchi, Ruggero
2016-11-01
Insect dynamics depend on temperature patterns, and therefore, global warming may lead to increasing frequencies and intensities of insect outbreaks. The aim of this work was to analyze the dynamics of the olive fruit fly, Bactrocera oleae (Rossi), in Tuscany (Italy). We profited from long-term records of insect infestation and weather data available from the regional database and agrometeorological network. We tested whether the analysis of 13 years of monitoring campaigns can be used as a basis for prediction models of B. oleae infestation. We related the percentage of infestation observed in the first part of the host-pest interaction and throughout the whole year to agrometeorological indices formulated for different time periods. A two-step approach was adopted to inspect the effect of weather on infestation: generalized linear model with a binomial error distribution and principal component regression to reduce the number of the agrometeorological factors and remove their collinearity. We found a consistent relationship between the degree of infestation and the temperature-based indices calculated for the previous period. The relationship was stronger with the minimum temperature of winter season. Higher infestation was observed in years following warmer winters. The temperature of the previous winter and spring explained 66 % of variance of early-season infestation. The temperature of previous winter and spring, and current summer, explained 72 % of variance of total annual infestation. These results highlight the importance of multiannual monitoring activity to fully understand the dynamics of B. oleae populations at a regional scale.
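The two-step statistical model described (principal component regression feeding a binomial GLM) can be sketched with sklearn and statsmodels. The data layout and variable names below are hypothetical stand-ins for the trap records and agrometeorological indices:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_samples = 120
agro = rng.normal(size=(n_samples, 12))       # stand-in agromet indices
total = np.full(n_samples, 50)                # olives inspected per sample
infested = rng.integers(0, 50, n_samples)     # infested olives per sample

# Step 1: principal component regression decorrelates the indices
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(agro))

# Step 2: binomial GLM on the retained components; the response is given
# as (successes, failures) counts, which statsmodels' Binomial supports
y = np.column_stack([infested, total - infested])
glm = sm.GLM(y, sm.add_constant(Z), family=sm.families.Binomial()).fit()
print(glm.summary())
```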
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
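The population formula behind these fitting weights is Var(s²) = μ₄/n − σ⁴(n−3)/(n(n−1)) for iid data. A naive plug-in of sample central moments gives a first approximation; the paper's h-statistics supply the unbiased small-sample versions, which are not reproduced here. A sketch of the plug-in estimator on synthetic amplitudes:

```python
import numpy as np

def var_of_sample_variance(x):
    """Plug-in estimate of Var(s^2) for iid data:
    Var(s^2) = mu4/n - sigma^4 (n - 3) / (n (n - 1)),
    with central moments replaced by their sample values.
    (Biased for small n; h-statistics give the unbiased version.)"""
    x = np.asarray(x, float)
    n = x.size
    d = x - x.mean()
    m2 = (d ** 2).mean()     # sample 2nd central moment
    m4 = (d ** 4).mean()     # sample 4th central moment
    return m4 / n - m2 ** 2 * (n - 3) / (n * (n - 1))

# Synthetic postsynaptic response amplitudes (illustrative values)
amps = np.random.default_rng(5).normal(10.0, 2.0, size=40)
s2 = amps.var(ddof=1)
print(f"variance {s2:.2f} +/- {np.sqrt(var_of_sample_variance(amps)):.2f}")
```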
Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells
2017-11-01
...approach including row scaling and Ward's minimum variance method was chosen. This analysis revealed two groups of four samples each. For the selected... articular cartilage in an ovine model. ...
2017-12-01
carefully to ensure only the minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM... baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of... The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
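Issue (1), the mean-density problem, is commonly handled by comparing pair counts against a random catalog of the same geometry. The sketch below uses the Landy-Szalay estimator, a related minimum-variance pair-counting scheme from the same era, rather than the paper's own estimator; the toy 3D positions are unclustered, so the output should scatter around zero:

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

def pair_counts(d, bins):
    return np.histogram(d, bins=bins)[0].astype(float)

def landy_szalay(data, rand, bins):
    """Two-point correlation function via the Landy-Szalay estimator,
    xi = (DD - 2 DR + RR) / RR, with counts normalized by pair totals."""
    nd, nr = len(data), len(rand)
    DD = pair_counts(pdist(data), bins) / (nd * (nd - 1) / 2)
    RR = pair_counts(pdist(rand), bins) / (nr * (nr - 1) / 2)
    DR = pair_counts(cdist(data, rand).ravel(), bins) / (nd * nr)
    return (DD - 2 * DR + RR) / RR

rng = np.random.default_rng(6)
data = rng.uniform(0, 100, size=(800, 3))    # toy "galaxies" (unclustered)
rand = rng.uniform(0, 100, size=(2000, 3))   # random catalog, same volume
bins = np.linspace(1, 30, 15)
print(landy_szalay(data, rand, bins))        # ~0 for an unclustered field
```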
The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey
2004-05-10
aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation)... minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well
NASA Astrophysics Data System (ADS)
Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang
2018-03-01
This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or originating from ice/land are removed, using the peak-to-mean ratio and the cosine similarity of the waveform, before wind speed is retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with collocated wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations, based on principal component analysis (PCA), the minimum variance (MV) estimator and a Back Propagation (BP) network, are implemented. The results indicate that, compared to the best results of the single-parameter observations, the approaches based on principal component analysis and minimum variance do not significantly improve retrieval accuracy; however, the BP network obtains an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.
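The MV combination used as one of the multi-parameter schemes is presumably the classic inverse-covariance weighting w = C⁻¹1 / (1ᵀC⁻¹1), which minimizes the variance of an unbiased linear combination of several noisy estimates of the same quantity. A sketch under that assumption, with illustrative wind-speed values and error covariance:

```python
import numpy as np

def min_variance_combine(estimates, C):
    """Minimum-variance unbiased combination of correlated estimates:
    weights w = C^{-1} 1 / (1^T C^{-1} 1); returns (combined, variance)."""
    ones = np.ones(len(estimates))
    Cinv_1 = np.linalg.solve(C, ones)
    w = Cinv_1 / (ones @ Cinv_1)
    return w @ estimates, 1.0 / (ones @ Cinv_1)

# Three wind-speed estimates (m/s) from different waveform observables,
# with an assumed error covariance between them (values illustrative)
v = np.array([7.8, 8.4, 8.1])
C = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.5, 0.4],
              [0.2, 0.4, 0.8]])
v_hat, var_hat = min_variance_combine(v, C)
print(f"combined: {v_hat:.2f} m/s, variance: {var_hat:.2f}")
```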
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse of the spatial covariance matrix must be calculated. Noteworthy attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the other methods when the dimensionality of the covariance matrices is reduced to the same dimension.
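The computational bottleneck mentioned here comes from the standard MV (Capon/MVDR) weight formula w = R⁻¹a / (aᴴR⁻¹a). A direct, non-accelerated reference implementation with diagonal loading is sketched below; the array size, snapshots, and steering vector are illustrative, and none of the paper's Legendre-based acceleration is included:

```python
import numpy as np

def mv_weights(R, a, loading=1e-3):
    """Direct minimum variance (Capon/MVDR) weights:
    w = R^{-1} a / (a^H R^{-1} a).
    Diagonal loading regularizes the sample covariance estimate."""
    M = R.shape[0]
    Rl = R + loading * (np.trace(R).real / M) * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 16-element array, unit steering vector (broadside)
M, N = 16, 200
a = np.ones(M, dtype=complex)
rng = np.random.default_rng(7)
snaps = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R = snaps @ snaps.conj().T / N            # sample spatial covariance
w = mv_weights(R, a)
print("distortionless check w^H a =", w.conj() @ a)   # should be ~1
```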
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, the weights computed by LCMV are often unable to form the radiation beam precisely towards the target user and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique than PSO for LCMV beamforming optimization. The algorithms were implemented in Matlab.
Demographics of an ornate box turtle population experiencing minimal human-induced disturbances
Converse, S.J.; Iverson, J.B.; Savidge, J.A.
2005-01-01
Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
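The reported 58% decline probability can be checked with a quick Monte Carlo sketch, assuming (which the abstract does not state) that annual growth rates are independent lognormal draws matching the reported mean and process variance.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_lam, var_lam, years, n_sim = 1.006, 0.029, 20, 100_000

# Lognormal parameters matching the reported mean and process variance.
sigma2 = np.log(1.0 + var_lam / mean_lam**2)
mu = np.log(mean_lam) - sigma2 / 2.0

lam = rng.lognormal(mu, np.sqrt(sigma2), size=(n_sim, years))
growth = lam.prod(axis=1)                 # total 20-year multiplier
print("P(decline) ~", (growth < 1.0).mean())   # ~0.58, matching the abstract
```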
24 CFR 891.145 - Owner deposit (Minimum Capital Investment).
Code of Federal Regulations, 2010 CFR
2010-04-01
... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum Capital...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General provisions. All monkfish caught by vessels issued a valid Federal monkfish permit must meet the minimum fish...
Ullrich, Susann; Aryani, Arash; Kraxenberger, Maria; Jacobs, Arthur M.; Conrad, Markus
2017-01-01
The literary genre of poetry is inherently related to the expression and elicitation of emotion via both content and form. To explore the nature of this affective impact at an extremely basic textual level, we collected ratings on eight different general affective meaning scales—valence, arousal, friendliness, sadness, spitefulness, poeticity, onomatopoeia, and liking—for 57 German poems (“die verteidigung der wölfe”) which the contemporary author H. M. Enzensberger had labeled as either “friendly,” “sad,” or “spiteful.” Following Jakobson's (1960) view on the vivid interplay of hierarchical text levels, we used multiple regression analyses to explore the specific influences of affective features from three different text levels (sublexical, lexical, and inter-lexical) on the perceived general affective meaning of the poems using three types of predictors: (1) lexical predictor variables capturing the mean valence and arousal potential of words; (2) inter-lexical predictors quantifying peaks, ranges, and dynamic changes within the lexical affective content; (3) sublexical measures of basic affective tone according to sound-meaning correspondences at the sublexical level (see Aryani et al., 2016). We find that the lexical predictors account for up to 50% of the variance in affective ratings. Moreover, inter-lexical and sublexical predictors account for a large portion of additional variance in the perceived general affective meaning. Together, the affective properties of all textual features used account for 43–70% of the variance in the affective ratings and still for 23–48% of the variance in the more abstract aesthetic ratings. In sum, our approach represents a novel method that successfully relates a prominent part of the variance in perceived general affective meaning in this corpus of German poems to quantitative estimates of affective properties of textual components at the sublexical, lexical, and inter-lexical levels. PMID:28123376
Wavefront aberrations of x-ray dynamical diffraction beams.
Liao, Keliang; Hong, Youli; Sheng, Weifan
2014-10-01
The effects of dynamical diffraction in x-ray diffractive optics with large numerical aperture render the wavefront aberrations difficult to describe using the aberration polynomials, yet knowledge of them plays an important role in a vast variety of scientific problems ranging from optical testing to adaptive optics. Although the diffraction theory of optical aberrations was established decades ago, its application in the area of x-ray dynamical diffraction theory (DDT) is still lacking. Here, we conduct a theoretical study on the aberration properties of x-ray dynamical diffraction beams. By treating the modulus of the complex envelope as the amplitude weight function in the orthogonalization procedure, we generalize the nonrecursive matrix method for the determination of orthonormal aberration polynomials, wherein Zernike DDT and Legendre DDT polynomials are proposed. As an example, we investigate the aberration evolution inside a tilted multilayer Laue lens. The corresponding Legendre DDT polynomials are obtained numerically, which represent balanced aberrations yielding minimum variance of the classical aberrations of an anamorphic optical system. The balancing of classical aberrations and their standard deviations are discussed. We also present the Strehl ratio of the primary and secondary balanced aberrations.
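The underlying recipe, orthonormalizing polynomials under an inner product weighted by the envelope modulus, can be illustrated with a simple Gram-Schmidt sketch on a sample grid. This is a numerical stand-in for the idea, not the nonrecursive matrix method used in the paper; the Gaussian envelope below is an arbitrary example weight.

```python
import numpy as np

def weighted_orthonormal_polys(x, weight, degree):
    """Gram-Schmidt on monomials under the inner product
    <f, g> = sum(weight * f * g); returns the sampled polynomials."""
    basis = []
    for k in range(degree + 1):
        p = x**k
        for q in basis:
            p = p - np.sum(weight * p * q) * q   # remove projections
        p = p / np.sqrt(np.sum(weight * p * p))  # normalize
        basis.append(p)
    return np.column_stack(basis)

x = np.linspace(-1, 1, 501)
flat = np.ones_like(x)                 # uniform weight gives Legendre-like polys
gauss = np.exp(-4 * x**2)              # stand-in amplitude envelope (assumption)
P_flat = weighted_orthonormal_polys(x, flat, 4)
P_amp = weighted_orthonormal_polys(x, gauss, 4)
# Orthonormality holds under the amplitude weight (approximately identity).
print(np.round(P_amp.T @ (gauss[:, None] * P_amp), 3))
```

Polynomials built this way are "balanced" in the sense that each one is the classical aberration minus its best fit by lower-order terms under the given weight, which is what minimizes the residual variance.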
A Two-Phase Model for Trade Matching and Price Setting in Double Auction Water Markets
NASA Astrophysics Data System (ADS)
Xu, Tingting; Zheng, Hang; Zhao, Jianshi; Liu, Yicheng; Tang, Pingzhong; Yang, Y. C. Ethan; Wang, Zhongjing
2018-04-01
Delivery in water markets is generally operated by agencies through channel systems, which imposes physical and institutional market constraints. Many water markets allow water users to post selling and buying requests on a board. However, water users may not be able to choose efficiently when the information (including the constraints) becomes complex. This study proposes an innovative two-phase model to address this problem based on practical experience in China. The first phase seeks and determines the optimal assignment that maximizes the incremental improvement of the system's social welfare according to the bids and asks in the water market. The second phase sets appropriate prices under constraints. Applying this model to China's Xiying Irrigation District shows that it can improve social welfare more than the current "pool exchange" method can. Within the second phase, we evaluate three objective functions (minimum variance, threshold-based balance, and two-sided balance), which represent different managerial goals. The threshold-based balance function should be preferred by most users, while the two-sided balance should be preferred by players who post extreme prices.
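The second-phase minimum variance objective can be sketched as a small bound-constrained optimization, assuming (for illustration only) that each matched trade must clear at a price between the seller's ask and the buyer's bid; the paper's actual constraint set is richer.

```python
import numpy as np
from scipy.optimize import minimize

# Matched trades: each price must lie between the seller's ask and buyer's bid.
asks = np.array([1.0, 1.4, 0.9, 1.2])
bids = np.array([1.6, 1.5, 1.3, 2.0])

def price_variance(p):
    """Managerial goal: prices as uniform as possible across trades."""
    return np.var(p)

x0 = (asks + bids) / 2.0
res = minimize(price_variance, x0, bounds=list(zip(asks, bids)))
print(np.round(res.x, 3), "variance:", round(float(res.fun), 6))
```

The threshold-based and two-sided balance objectives mentioned in the abstract would simply replace `price_variance` with a different function of the cleared prices.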
Ichthyoplankton abundance and variance in a large river system: concerns for long-term monitoring
Holland-Bartels, Leslie E.; Dewey, Michael R.; Zigler, Steven J.
1995-01-01
System-wide spatial patterns of ichthyoplankton abundance and variability were assessed in the upper Mississippi and lower Illinois rivers to address the experimental design and statistical confidence in density estimates. Ichthyoplankton was sampled from June to August 1989 in primary milieus (vegetated and non-vegetated backwaters and impounded areas, main channels and main channel borders) in three navigation pools (8, 13 and 26) of the upper Mississippi River and in a downstream reach of the Illinois River. Ichthyoplankton densities varied among stations of similar aquatic landscapes (milieus) more than among subsamples within a station. An analysis of sampling effort indicated that the collection of single samples at many stations in a given milieu type is statistically and economically preferable to the collection of multiple subsamples at fewer stations. Cluster analyses also revealed that stations only generally grouped by their preassigned milieu types. Pilot studies such as this can define station groupings and sources of variation beyond an a priori habitat classification. Thus the minimum intensity of sampling required to achieve a desired statistical confidence can be identified before implementing monitoring efforts.
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
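A rough sense of the assignment-problem machinery mentioned above is given by the classical vertex-assignment construction below, which pads the label cost matrix with deletion and insertion slots. It ignores edges and uses uniform costs, whereas the paper's binary program handles edges and tunes costs by a minimum normalized variance criterion.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_edit_cost(labels_a, labels_b, c_sub=1.0, c_indel=1.0):
    """Vertex-assignment estimate of edit cost between two labeled graphs,
    ignoring edges: substitutions in the top-left block, deletions and
    insertions on padded diagonals (BIG forbids invalid pairings)."""
    n, m, BIG = len(labels_a), len(labels_b), 1e9
    cost = np.zeros((n + m, n + m))
    cost[:n, :m] = [[0.0 if a == b else c_sub for b in labels_b] for a in labels_a]
    cost[:n, m:] = BIG * (1.0 - np.eye(n)) + c_indel * np.eye(n)   # delete a_i
    cost[n:, :m] = BIG * (1.0 - np.eye(m)) + c_indel * np.eye(m)   # insert b_j
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# Toy "chemical" graphs described by atom labels only.
print(assignment_edit_cost(list("CCNO"), list("CCO")))  # 1.0 (delete the N)
```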
Kaufman, Michelle R; Patel, Eshan U; Dam, Kim H; Packman, Zoe R; Van Lith, Lynn M; Hatzold, Karin; Marcell, Arik V; Mavhu, Webster; Kahabuka, Catherine; Mahlasela, Lusanda; Njeuhmeli, Emmanuel; Seifert Ahanda, Kim; Ncube, Getrude; Lija, Gissenge; Bonnecwe, Collen; Tobian, Aaron A R
2018-04-03
The minimum package of voluntary medical male circumcision (VMMC) services, as defined by the World Health Organization, includes human immunodeficiency virus (HIV) testing, HIV prevention counseling, screening/treatment for sexually transmitted infections, condom promotion, and the VMMC procedure. The current study aimed to assess whether adolescents received these key elements. Quantitative surveys were conducted among male adolescents aged 10-19 years (n = 1293) seeking VMMC in South Africa, Tanzania, and Zimbabwe. We used a summative index score of 8 self-reported binary items to measure receipt of important elements of the World Health Organization-recommended HIV minimum package and the US President's Emergency Plan for AIDS Relief VMMC recommendations. Counseling sessions were observed for a subset of adolescents (n = 44). To evaluate factors associated with counseling content, we used Poisson regression models with generalized estimating equations and robust variance estimation. Although counseling included VMMC benefits, little attention was paid to risks, including how to identify complications, what to do if they arise, and why avoiding sex and masturbation could prevent complications. Overall, older adolescents (aged 15-19 years) reported receiving more items in the recommended minimum package than younger adolescents (aged 10-14 years; adjusted β, 0.17; 95% confidence interval [CI], .12-.21; P < .001). Older adolescents were also more likely to report receiving HIV test education and promotion (42.7% vs 29.5%; adjusted prevalence ratio [aPR], 1.53; 95% CI, 1.16-2.02) and a condom demonstration with condoms to take home (16.8% vs 4.4%; aPR, 2.44; 95% CI, 1.30-4.58). No significant age differences appeared in reports of explanations of VMMC risks and benefits or uptake of HIV testing. These self-reported findings were confirmed during counseling observations. Moving toward age-equitable HIV prevention services during adolescent VMMC likely requires standardizing counseling content, as there are significant age differences in HIV prevention content received by adolescents.
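The model class reported here, a Poisson regression fit by GEE with robust variance, can be sketched with statsmodels on synthetic data; the variable names, cluster structure, and effect size below are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "older": rng.integers(0, 2, n),      # 15-19 vs 10-14 years (assumed coding)
    "site": rng.integers(0, 30, n),      # clustering unit (e.g., clinic)
})
lam = np.exp(1.2 + 0.17 * df["older"])   # effect size borrowed from the abstract
df["items"] = rng.poisson(lam)           # items of the minimum package received

X = sm.add_constant(df[["older"]])
model = sm.GEE(df["items"], X, groups=df["site"],
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()                        # robust (sandwich) SEs by default
print(res.summary())
```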
The scope and control of attention: Sources of variance in working memory capacity.
Chow, Michael; Conway, Andrew R A
2015-04-01
Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
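A minimal numpy sketch of the quasi-analytic idea is shown below, assuming flat priors, a known diagonal sampling covariance, and a simple grid over the model error variance; the regression coefficients are integrated out analytically at each grid point.

```python
import numpy as np

def model_error_posterior(y, X, Sigma_s, grid):
    """Marginal posterior density of the model-error variance on a grid,
    assuming flat priors, with regression coefficients integrated out."""
    logp = []
    for s2 in grid:
        Lam = s2 * np.eye(len(y)) + Sigma_s      # total error covariance
        Li = np.linalg.inv(Lam)
        XtLiX = X.T @ Li @ X
        beta = np.linalg.solve(XtLiX, X.T @ Li @ y)   # GLS estimate at this s2
        r = y - X @ beta
        logp.append(-0.5 * (np.linalg.slogdet(Lam)[1]
                            + np.linalg.slogdet(XtLiX)[1]
                            + r @ Li @ r))
    p = np.exp(np.array(logp) - max(logp))
    return p / (p.sum() * (grid[1] - grid[0]))   # normalized density

rng = np.random.default_rng(3)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Sigma_s = np.diag(rng.uniform(0.05, 0.2, n))     # at-site sampling variances
true_s2 = 0.1                                     # true model error variance
y = X @ [0.2, 0.1] + rng.normal(0, np.sqrt(true_s2 + Sigma_s.diagonal()))
grid = np.linspace(1e-4, 0.5, 250)
post = model_error_posterior(y, X, Sigma_s, grid)
print("posterior mean s2:", round(float((grid * post).sum() * (grid[1] - grid[0])), 3))
```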
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and offer a reference statistical method to clinical and basic medicine researchers who take the design of repeated measures. Using repeated measures and multivariate analysis of variance (ANOVA) process of the general linear model in SPSS and giving comparison among different groups and different measure time pairwise. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If any (P
42 CFR 84.206 - Particulate tests; respirators with filters; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Particulate tests; respirators with filters... filters; minimum requirements; general. (a) Three respirators with cartridges containing, or having attached to them, filters for protection against particulates will be tested in accordance with the...
42 CFR 84.206 - Particulate tests; respirators with filters; minimum requirements; general.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Particulate tests; respirators with filters... filters; minimum requirements; general. (a) Three respirators with cartridges containing, or having attached to them, filters for protection against particulates will be tested in accordance with the...
42 CFR 84.206 - Particulate tests; respirators with filters; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Particulate tests; respirators with filters... filters; minimum requirements; general. (a) Three respirators with cartridges containing, or having attached to them, filters for protection against particulates will be tested in accordance with the...
42 CFR 84.206 - Particulate tests; respirators with filters; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Particulate tests; respirators with filters... filters; minimum requirements; general. (a) Three respirators with cartridges containing, or having attached to them, filters for protection against particulates will be tested in accordance with the...
42 CFR 84.206 - Particulate tests; respirators with filters; minimum requirements; general.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Particulate tests; respirators with filters... filters; minimum requirements; general. (a) Three respirators with cartridges containing, or having attached to them, filters for protection against particulates will be tested in accordance with the...
Need for Tolerances and Tolerance Exemptions for Minimum Risk Pesticides
The ingredients used in minimum risk products used on food, food crops, food contact surfaces, or animal feed commodities generally have a tolerance or tolerance exemption. Learn about tolerances and tolerance exemptions for minimum risk ingredients.
Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E
2013-04-01
Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
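A toy numpy analogue of fitting a second, log-linear model to residual variance is sketched below; it alternates weighted least squares for the mean with a regression on log squared residuals, and omits the random effects and likelihood machinery of the actual double hierarchical GLM.

```python
import numpy as np

def iterative_dispersion_fit(y, X, Z, n_iter=20):
    """Toy two-level fit: a weighted mean model (design X) alternating with
    a log-linear model for the residual variance (design Z). A numpy
    stand-in for the double hierarchical GLM idea, without random effects."""
    log_var = np.zeros(len(y))
    beta = gamma = None
    for _ in range(n_iter):
        w = np.exp(-log_var)                        # weights = 1 / fitted variance
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted LS mean model
        r2 = (y - X @ beta) ** 2
        # E[log r2] = log(sigma2) + E[log chi2_1]; the constant only shifts
        # the intercept, so contrasts in gamma remain interpretable.
        gamma = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]
        log_var = Z @ gamma
    return beta, gamma

rng = np.random.default_rng(4)
n = 4000
group = rng.integers(0, 2, n)                # e.g., progeny of two sires
X = Z = np.column_stack([np.ones(n), group])
y = 1.0 + 0.5 * group + rng.normal(0, np.where(group == 1, 2.0, 1.0))
beta, gamma = iterative_dispersion_fit(y, X, Z)
print("mean model:", beta.round(2))                  # ~ [1.0, 0.5]
print("log-variance contrast:", gamma[1].round(2))   # ~ log(4) = 1.39
```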
14 CFR 91.1053 - Crewmember experience.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Fractional Ownership... and ratings: (1) Total flight time for all pilots: (i) Pilot in command—A minimum of 1,500 hours. (ii) Second in command—A minimum of 500 hours. (2) For multi-engine turbine-powered fixed-wing and powered...
General Requirements and Minimum Standards.
ERIC Educational Resources Information Center
2003
This publication provides the General Requirements and Minimum Standards developed by the National Court Reporters Association's Council on Approved Student Education (CASE). They are the same for all court reporter education programs, whether an institution is applying for approval for the first time or for a new grant of approval. The first…
29 CFR 780.1 - General scope of the Act.
Code of Federal Regulations, 2010 CFR
2010-07-01
... application which establishes minimum wage, overtime pay, equal pay, and child labor requirements that apply... for compliance and, in the event of violations, to supervise the payment of unpaid minimum wages or... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL...
Estimators for Two Measures of Association for Set Correlation.
ERIC Educational Resources Information Center
Cohen, Jacob; Nee, John C. M.
1984-01-01
Two measures of association between sets of variables have been proposed for set correlation: the proportion of generalized variance and the proportion of additive variance. Because these measures are strongly positively biased, approximate expected values and estimators of these measures are derived and checked. (Author/BW)
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this generality is that it includes the inverse DWT. 20 refs.
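A simple way to see the flavor of the problem, though not Cole's representation theory, is a projection-onto-convex-sets sketch: alternate between enforcing the observed Haar DWT coefficients and the a priori amplitude bounds.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n = 2^k (rows are basis vectors)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 16
H = haar_matrix(n)
rng = np.random.default_rng(5)
x_true = np.clip(np.cumsum(rng.normal(0, 0.3, n)) + 1.0, 0.0, 2.0)
c = H @ x_true
observed = np.arange(n) < 10            # incomplete sampling of the DWT
lo, hi = 0.0, 2.0                       # a priori bounds on the signal

x = np.zeros(n)
for _ in range(200):                    # alternate projections (POCS)
    coef = H @ x
    coef[observed] = c[observed]        # enforce the known coefficients
    x = np.clip(H.T @ coef, lo, hi)     # enforce the amplitude bounds
# x is consistent with both constraint sets, not necessarily equal to x_true.
print(np.round(np.abs(x - x_true).max(), 3))
```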
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
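The model's core prediction can be reproduced with a two-line calculation, assuming colony performance is normally distributed and that multiple mating lowers the among-colony variance (the numerical values below are arbitrary).

```python
from math import erf, sqrt

def success_prob(mu, sigma, threshold=0.0):
    """P(colony performance > threshold) for Normal(mu, sigma)."""
    return 0.5 * (1.0 - erf((threshold - mu) / (sigma * sqrt(2.0))))

# If the average colony succeeds (mu > threshold), lower variance (taken here
# as a proxy for multiple mating) raises expected success; if the average
# colony fails, higher variance (single mating) is favored.
for mu in (+0.5, -0.5):
    p_multi = success_prob(mu, sigma=0.5)   # multiple mating: low variance
    p_single = success_prob(mu, sigma=1.5)  # single mating: high variance
    print(f"mu={mu:+.1f}: multiple={p_multi:.2f}, single={p_single:.2f}")
```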
Future mission studies: Preliminary comparisons of solar flux models
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
The results of comparisons of the solar flux models are presented. (The wavelength lambda = 10.7 cm radio flux is the best indicator of the strength of the ionizing radiations, such as solar ultraviolet and x-ray emissions, that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Thus, accurate forecasting of the solar flux F sub 10.7 is crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimation that properly combines all three models into one that minimizes the variance is also discussed. Such a combination retains all the physics inherent in each model and can be regarded as the end point of purely statistical approaches to solar flux forecasting, short of any nonlinear chaotic approach.
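The combined estimator alluded to is the standard inverse-variance (BLUE) weighting, sketched below with hypothetical F10.7 forecasts and error variances; it assumes unbiased, mutually independent model errors.

```python
import numpy as np

def blue_combination(forecasts, variances):
    """Best linear unbiased (inverse-variance weighted) combination of
    unbiased, independent forecasts of the same quantity."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()          # weights sum to one
    combined = w @ np.asarray(forecasts, dtype=float)
    combined_var = 1.0 / (1.0 / v).sum()     # never larger than the best input
    return combined, combined_var, w

# Hypothetical F10.7 forecasts (sfu) and error variances for three models.
f = [180.0, 195.0, 170.0]
v = [100.0, 225.0, 400.0]
est, var, w = blue_combination(f, v)
print(f"combined = {est:.1f} sfu, variance = {var:.1f}, weights = {np.round(w, 2)}")
```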
NASA Astrophysics Data System (ADS)
Sun, Xuelian; Liu, Zixian
2016-02-01
In this paper, a new estimator of correlation matrix is proposed, which is composed of the detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
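Given any estimated correlation matrix, whether built from Pearson or DCCA coefficients at a chosen scale, the global minimum variance portfolio has a closed form, sketched below with illustrative numbers.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance portfolio: w = C^-1 1 / (1' C^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Covariance assembled from volatilities and a correlation matrix; the
# correlation entries could come from Pearson or DCCA estimation.
vol = np.array([0.20, 0.15, 0.10])
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(vol, vol) * corr
w = min_variance_weights(cov)
print(np.round(w, 3), "portfolio sd:", round(float(np.sqrt(w @ cov @ w)), 4))
```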
Demodulation of messages received with low signal to noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with a rigorous control data representation, allow significant computation savings compared to conventional realizations. Nominal operation has been verified on a QPSK demodulator down to a signal-to-noise ratio of -3 dB.
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a consider mode: their estimates are not improved, but their associated uncertainties are allowed to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
This paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance.
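For a first-order plant with known parameters, the minimum variance rule has a one-line form; the sketch below compares it with a crude proportional rule on a simulated loop (the plant model and gains are assumptions, not the paper's chiller dynamics).

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, T, ref = 0.9, 0.5, 2000, 1.0      # assumed first-order plant and setpoint

def simulate(controller):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * controller(y[t]) + rng.normal(0, 0.1)
    return y

# Minimum variance rule for y[t+1] = a*y[t] + b*u[t] + e[t+1]: cancel the
# predictable part so y[t+1] = ref + e[t+1], the lowest achievable variance.
y_mv = simulate(lambda y: (ref - a * y) / b)
y_p = simulate(lambda y: 1.0 * (ref - y))           # crude proportional rule
print("MV mean sq. deviation:", round(float(np.mean((y_mv[100:] - ref) ** 2)), 4))
print("P  mean sq. deviation:", round(float(np.mean((y_p[100:] - ref) ** 2)), 4))
```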
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The results obtained differ from one another, and we show that the whole boundary is neither stationary nor planar, so that basic assumptions of these methods are not well satisfied. We then analyze the internal structure more finely to investigate the departures from planarity. Using the basic mathematical definition of a one-dimensional physical problem, we introduce a new single-spacecraft method, called local normal analysis (LNA), for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximately one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from that of the other fields, so that, at some places, the magnetic field can have a 1-D structure although the plasma variations do not all verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
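Classical single-spacecraft MVA, the starting point of the analysis above, reduces to an eigen-decomposition of the magnetic covariance matrix; the sketch below applies it to a synthetic 1-D current sheet and reports the eigenvalue ratio and normal component discussed in the abstract.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classical MVA: eigen-decompose the magnetic covariance matrix.
    B is (n_samples, 3); returns eigenvalues (descending) and eigenvectors
    as columns; the last column estimates the boundary normal."""
    M = np.cov(B, rowvar=False)          # 3x3 magnetic variance matrix
    vals, vecs = np.linalg.eigh(M)       # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

rng = np.random.default_rng(7)
n = 500
# Synthetic 1-D current sheet: large rotation in x-y, small constant Bn + noise.
t = np.linspace(-1, 1, n)
B = np.column_stack([np.tanh(3 * t),
                     np.sqrt(1 - np.tanh(3 * t)**2),
                     0.05 * np.ones(n)])
B += rng.normal(0, 0.02, B.shape)
vals, vecs = minimum_variance_analysis(B)
print("eigenvalue ratio (intermediate/minimum):", round(vals[1] / vals[2], 1))
print("estimated normal:", np.round(vecs[:, 2], 3))   # ~ +/- z direction
print("estimated <Bn>:", round(float(B.mean(axis=0) @ vecs[:, 2]), 3))
```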
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria that provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
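An ordinary kriging estimate at a point, with its kriging variance, follows from one bordered linear system; the sketch below uses an exponential semivariogram and synthetic station data standing in for the 60 precipitation stations.

```python
import numpy as np

def exponential_semivariogram(h, sill=1.0, rng_=50.0, nugget=0.0):
    return nugget + sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(xy, z, x0, **vario):
    """Ordinary kriging at one point: unbiased, minimum-variance weights
    from the semivariogram, with a Lagrange multiplier for sum(w) = 1."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = exponential_semivariogram(d, **vario)
    G[n, n] = 0.0
    g = np.ones(n + 1)
    g[:n] = exponential_semivariogram(np.linalg.norm(xy - x0, axis=1), **vario)
    sol = np.linalg.solve(G, g)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ g[:n] + mu        # estimate and kriging variance

rng = np.random.default_rng(8)
xy = rng.uniform(0, 100, (60, 2))                  # 60 hypothetical stations
z = 300 + 0.5 * xy[:, 0] + rng.normal(0, 5, 60)    # toy precipitation field
est, kvar = ordinary_kriging(xy, z, np.array([50.0, 50.0]), sill=30.0, rng_=40.0)
print(f"estimate = {est:.1f}, kriging variance = {kvar:.2f}")
```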
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used to rate variables based on a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component, varimax factor analysis supported the model structure suggested by the regression analyses.
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 27 2012-07-01 2012-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2010 CFR
2010-07-01
... from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.30 Non-waste determinations and variances from classification as a solid waste. In...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2011 CFR
2011-07-01
... from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.30 Non-waste determinations and variances from classification as a solid waste. In...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance bounds to zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
30 CFR 75.1103-3 - Automatic fire sensor and warning device systems; minimum requirements; general.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Automatic fire sensor and warning device...-UNDERGROUND COAL MINES Fire Protection § 75.1103-3 Automatic fire sensor and warning device systems; minimum requirements; general. Automatic fire sensor and warning device systems installed in belt haulageways of...
30 CFR 75.1103-3 - Automatic fire sensor and warning device systems; minimum requirements; general.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Automatic fire sensor and warning device...-UNDERGROUND COAL MINES Fire Protection § 75.1103-3 Automatic fire sensor and warning device systems; minimum requirements; general. Automatic fire sensor and warning device systems installed in belt haulageways of...
30 CFR 75.1103-3 - Automatic fire sensor and warning device systems; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Automatic fire sensor and warning device...-UNDERGROUND COAL MINES Fire Protection § 75.1103-3 Automatic fire sensor and warning device systems; minimum requirements; general. Automatic fire sensor and warning device systems installed in belt haulageways of...
30 CFR 75.1103-3 - Automatic fire sensor and warning device systems; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Automatic fire sensor and warning device...-UNDERGROUND COAL MINES Fire Protection § 75.1103-3 Automatic fire sensor and warning device systems; minimum requirements; general. Automatic fire sensor and warning device systems installed in belt haulageways of...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Dust, fume, and mist tests; respirators with filters; minimum requirements; general. 84.1158 Section 84.1158 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Dust, fume, and mist tests; respirators with filters; minimum requirements; general. 84.1158 Section 84.1158 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Dust, fume, and mist tests; respirators with filters; minimum requirements; general. 84.1158 Section 84.1158 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Dust, fume, and mist tests; respirators with filters; minimum requirements; general. 84.1158 Section 84.1158 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Dust, fume, and mist tests; respirators with filters; minimum requirements; general. 84.1158 Section 84.1158 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES...
14 CFR Appendix G to Part 91 - Operations in Reduced Vertical Separation Minimum (RVSM) Airspace
Code of Federal Regulations, 2012 CFR
2012-01-01
... flight planned route through the appropriate flight planning information sources. (b) No person may show..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT..., air traffic control (ATC) separates aircraft by a minimum of 1,000 feet vertically between flight...
14 CFR Appendix G to Part 91 - Operations in Reduced Vertical Separation Minimum (RVSM) Airspace
Code of Federal Regulations, 2014 CFR
2014-01-01
... flight planned route through the appropriate flight planning information sources. (b) No person may show..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT..., air traffic control (ATC) separates aircraft by a minimum of 1,000 feet vertically between flight...
14 CFR Appendix G to Part 91 - Operations in Reduced Vertical Separation Minimum (RVSM) Airspace
Code of Federal Regulations, 2011 CFR
2011-01-01
... flight planned route through the appropriate flight planning information sources. (b) No person may show..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT..., air traffic control (ATC) separates aircraft by a minimum of 1,000 feet vertically between flight...
14 CFR Appendix G to Part 91 - Operations in Reduced Vertical Separation Minimum (RVSM) Airspace
Code of Federal Regulations, 2013 CFR
2013-01-01
... flight planned route through the appropriate flight planning information sources. (b) No person may show..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT..., air traffic control (ATC) separates aircraft by a minimum of 1,000 feet vertically between flight...
14 CFR Appendix G to Part 91 - Operations in Reduced Vertical Separation Minimum (RVSM) Airspace
Code of Federal Regulations, 2010 CFR
2010-01-01
... flight planned route through the appropriate flight planning information sources. (b) No person may show..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT..., air traffic control (ATC) separates aircraft by a minimum of 1,000 feet vertically between flight...
More about arc-polarized structures in the solar wind
NASA Astrophysics Data System (ADS)
Haaland, S.; Sonnerup, B.; Paschmann, G.
2012-05-01
We report results from a Cluster-based study of the properties of 28 arc-polarized magnetic structures (also called rotational discontinuities) in the solar wind. These Alfvénic events were selected from the database created and analyzed by Knetter (2005) by use of criteria chosen to eliminate ambiguous cases. His studies showed that standard, four-spacecraft timing analysis in most cases lacks sufficient accuracy to identify the small normal magnetic field components expected to accompany such structures, leaving unanswered the question of their existence. Our study aims to break this impasse. By careful application of minimum variance analysis of the magnetic field (MVAB) from each individual spacecraft, we show that, in most cases, a small but significantly non-zero magnetic field component was present in the direction perpendicular to the discontinuity. In the very few cases where this component was found to be large, examination revealed that MVAB had produced an unusual and unexplained orientation of the normal vector. On the whole, MVAB shows that many verifiable rotational discontinuities (Bn ≠ 0) exist in the solar wind and that their eigenvalue ratio (EVR = intermediate/minimum variance) can be extremely large (up to EVR = 400). Each of our events comprises four individual spacecraft crossings. The events include 17 ion-polarized cases and 11 electron-polarized ones. Fifteen of the ion events have widths ranging from 9 to 21 ion inertial lengths, with two outliers at 46 and 54. The electron-polarized events are generally thicker: nine cases fall in the range 20-71 ion inertial lengths, with two outliers at 9 and 13. In agreement with theoretical predictions from a one-dimensional, ideal, Hall-MHD description (Sonnerup et al., 2010), the ion-polarized events show a small depression in field magnitude, while the electron-polarized ones tend to show a small enhancement. This effect was also predicted by Wu and Lee (2000). Judging only from the sense of the plasma flow across our DDs, their propagation appears to be sunward as often as anti-sunward. However, we argue that this result can be misleading as a consequence of the possible presence of magnetic islands within the DDs. How the rotational discontinuities come into existence, how they evolve with time, and what roles they play in the solar wind remain open questions.
NASA Astrophysics Data System (ADS)
Li, T.; Leblanc, T.; McDermid, S.; Wu, D. L.
2007-12-01
The JPL Rayleigh lidars at Mauna Loa Observatory (MLO), HI (19.5N, 155.6W) and Table Mountain Observatory (TMO), CA (34.4N, 117.7W) have been operated for regular nighttime temperature data acquisition since 1994 and 1989, respectively. Using the monthly mean temperature vertical profiles observed by the JPL lidars (35-85 km) and nearby radiosondes (5-30 km), and with linear regression analysis, we are able to extract the temperature trend, solar cycle, El Niño-Southern Oscillation (ENSO), and Quasi-Biennial Oscillation (QBO) signals from the troposphere to the upper mesosphere over MLO and TMO. The temperature trends behave differently at the two sites: a minor trend at MLO, but a more negative trend at TMO. The solar cycle responses in temperature are generally positive above the middle stratosphere at both sites; below it, the response is negative at MLO and positive at TMO. During El Niño events, warmer temperatures in the troposphere and upper mesosphere and colder temperatures in the stratosphere and lower mesosphere were observed at MLO, and almost vice versa at TMO. Significant QBO oscillations were observed in the stratosphere, with amplitudes of ~2-3 K and with clearer downward phase progression at MLO than at TMO. The mesospheric QBO near 75-85 km is clearly present at both sites, with an amplitude of ~2 K and a longer vertical wavelength than in the stratosphere. In addition, we calculated gravity wave (GW) variances using lidar temperature profiles with 30-min and 1-km resolutions in the upper stratosphere (38-50 km) and lower mesosphere (50-62 km), and nearby radiosondes in the lower stratosphere (18-30 km). The monthly mean GW variances clearly show an annual oscillation with a maximum in winter and a minimum in summer. The QBO signature can be clearly seen in the lower stratosphere. In the upper stratosphere, a longer-period oscillation (~5-6 years) with maxima in 2000-2001 and 2006 was revealed to synchronize with the solar maximum and minimum. No clear signature of GW activity in the lower mesosphere could be associated with that in the upper stratosphere, suggesting that part of the gravity waves may be either dissipated or reflected when crossing the stratopause region.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
The attentional blink in typically developing and reading-disabled children.
de Groot, Barry J A; van den Bos, Kees P; van der Meulen, Bieuwe F; Minnaert, Alexander E M G
2015-11-01
This study's research question was whether selective visual attention, and specifically the attentional blink (AB) as operationalized by a dual target rapid serial visual presentation (RSVP) task, can explain individual differences in word reading (WR) and reading-related phonological performances in typically developing children and reading-disabled subgroups. A total of 407 Dutch school children (Grades 3-6) were classified either as typically developing (n = 302) or as belonging to one of three reading-disabled subgroups: reading disabilities only (RD-only, n = 69), both RD and attention problems (RD+ADHD, n = 16), or both RD and a specific language impairment (RD+SLI, n = 20). The RSVP task employed alphanumeric stimuli that were presented in two blocks. Standardized Dutch tests were used to measure WR, phonemic awareness (PA), and alphanumeric rapid naming (RAN). Results indicate that, controlling for PA and RAN performance, general RSVP task performance contributes significant unique variance to the prediction of WR. Specifically, consistent group main effects for the parameter of AB(minimum) were found, whereas there were no AB-specific effects (i.e., AB(width) and AB(amplitude)) except for the RD+SLI group. Finally, there was a group by measurement interaction, indicating that the RD-only and comorbid groups are differentially sensitive for prolonged testing sessions. These results suggest that more general factors involved in RSVP processing may explain the group differences found. Copyright © 2015 Elsevier Inc. All rights reserved.
Metacognition Beliefs and General Health in Predicting Alexithymia in Students
Babaei, Samaneh; Varandi, Shahryar Ranjbar; Hatami, Zohre; Gharechahi, Maryam
2016-01-01
Objectives: The present study was conducted to investigate the role of metacognition beliefs and general health in alexithymia in Iranian students. Methods: This descriptive, correlational study included 200 high school students, selected randomly from two cities (Sari and Dargaz) in Iran. The Metacognitive Strategies Questionnaire (MCQ-30), the General Health Questionnaire (GHQ), and the Farsi version of the Toronto Alexithymia Scale (TAS-20) were used to gather the data. The data were analyzed using Pearson's correlation and regression. Results: The findings indicated significant positive relationships between alexithymia and all subscales of general health. The highest correlation was between alexithymia and the anxiety subscale (r=0.36, P<0.01). There was also a significant negative relationship between alexithymia and some metacognitive strategies; the strongest was between alexithymia and the risk uncontrollability subscale (r=-0.359, P<0.01). Based on the results of multiple regressions, three predictors explained 21% of the variance (R2=0.21, F=7.238, P<0.01). The anxiety subscale of general health significantly predicted 13% of the variance of alexithymia (β=0.36, P<0.01) and the risk uncontrollability subscale of metacognition beliefs predicted about 8% of the variance of alexithymia (β=-0.028, P<0.01). Conclusions: The findings demonstrated that metacognition beliefs and general health play an important role in predicting alexithymia in students. PMID:26383206
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
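For contrast with the model-based transforms mentioned above, here is a minimal sketch of one member of that family, the generalized-log (glog) transform. The exact functional form varies across authors, and the offset constant c (which controls behavior at low intensities) is assumed known here rather than estimated.

```python
import numpy as np

def glog(x, c=1.0):
    # One common generalized-log form: ~log(x) for x >> c, finite and
    # variance-stabilizing near x = 0.
    x = np.asarray(x, dtype=float)
    return np.log((x + np.sqrt(x**2 + c**2)) / 2.0)

intensities = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
print(glog(intensities, c=50.0))
```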
30 CFR 1202.53 - Minimum royalty.
Code of Federal Regulations, 2011 CFR
2011-07-01
30 CFR, Mineral Resources; Office of Surface Mining Reclamation and Enforcement, Department of the Interior; Natural Resources Revenue; Royalties; Oil, Gas, and OCS Sulfur, General. § 1202.53 Minimum royalty. For leases that...
24 CFR 984.105 - Minimum program size.
Code of Federal Regulations, 2010 CFR
2010-04-01
24 CFR, Housing and Urban Development; Regulations Relating to Housing and Urban Development; Section 8 and Public Housing Family Self-Sufficiency Program, General. § 984.105 Minimum program size...
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
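A generic sketch of the core linear step is given below; it is not the authors' implementation, and the toy problem sizes are arbitrary. For an underdetermined lead-field matrix G, the Tikhonov-regularized minimum norm estimate is x = G^T (G G^T + λI)^{-1} b, where λ trades data fit against solution norm.

```python
import numpy as np

def min_norm_tikhonov(G, b, lam):
    # Regularized minimum norm: x = G' (G G' + lam I)^{-1} b
    m = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(m), b)

rng = np.random.default_rng(1)
G = rng.normal(size=(32, 500))           # 32 sensors, 500 source voxels (toy)
x_true = np.zeros(500)
x_true[100] = 1.0                        # single active source
b = G @ x_true + 0.01 * rng.normal(size=32)
x_hat = min_norm_tikhonov(G, b, lam=1e-2)
print(int(np.argmax(np.abs(x_hat))))     # index of strongest estimated source
```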
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
ERIC Educational Resources Information Center
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
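A minimal sketch of the univariate effect size described above follows; the three small groups are made-up numbers for illustration. Eta-squared is the between-groups sum of squares divided by the total sum of squares.

```python
import numpy as np

def eta_squared(groups):
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    ss_total = ((all_y - grand) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    return ss_between / ss_total

groups = [np.array([12.0, 14, 13]), np.array([15.0, 17, 16]), np.array([11.0, 10, 12])]
print(round(eta_squared(groups), 3))    # proportion of variance due to group membership
```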
Kirkpatrick, Robert M.; McGue, Matt; Iacono, William G.
2015-01-01
The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES—an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background: Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. We saw only marginal variability attributable to intra-operator effects and to the portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the mean dense point variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, with the more leniently defined landmarks showing the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Estimation of stable boundary-layer height using variance processing of backscatter lidar data
NASA Astrophysics Data System (ADS)
Saeed, Umar; Rocadenbosch, Francesc
2017-04-01
The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and assessing the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation using variance processing of attenuated backscatter lidar measurements, including its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under project TEC2015-63832-P, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
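The sketch below illustrates the variance-processing idea in its simplest form: compute the temporal variance of the attenuated backscatter at each range gate over a time window, smooth the profile, and take the height of the lowest local minimum as the SBLH candidate. The array layout, the smoothing choice, and the synthetic profile are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import argrelmin

def sblh_from_variance(backscatter, heights, smooth=5):
    # backscatter: (n_times, n_heights) attenuated backscatter matrix
    var_profile = backscatter.var(axis=0)
    var_smooth = np.convolve(var_profile, np.ones(smooth) / smooth, mode="same")
    minima = argrelmin(var_smooth)[0]            # interior local minima
    return heights[minima[0]] if minima.size else np.nan

heights = np.arange(0.0, 2000.0, 30.0)           # range gates [m AGL]
profile = np.exp(-((heights - 400.0) / 150.0) ** 2) + 0.1
noise = np.random.default_rng(2).normal(size=(120, heights.size))
data = profile * (1.0 + 0.2 * noise)             # synthetic aerosol stratification
print(sblh_from_variance(data, heights))
```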
2012-01-01
Background Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related traits such as mammary disease traits in dairy cattle. Methods Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level, per chromosome, and in regions of 100 SNP on a chromosome. Results Genomic proportions of the total variance differed between traits. Genomic correlations were lower than pedigree-based genetic correlations and they were highest between general mastitis and pathogen-specific traits because of the part-whole relationship between these traits. The chromosome-wise genomic proportions of the total variance differed between traits, with some chromosomes explaining higher or lower values than expected in relation to chromosome size. Few chromosomes showed pleiotropic effects and only chromosome 19 had a clear effect on all traits, indicating the presence of QTL with a general effect on mastitis resistance. The region-wise patterns of genomic variances differed between traits. Peaks indicating QTL were identified but were not very distinctive because a common prior for the marker effects was used. There was a clear difference in the region-wise patterns of genomic correlation among combinations of traits, with distinctive peaks indicating the presence of pleiotropic QTL. Conclusions The results show that it is possible to estimate, genome-wide and region-wise genomic (co)variances of mastitis resistance traits in dairy cattle using multivariate genomic models. PMID:22640006
Measuring the Power Spectrum with Peculiar Velocities
NASA Astrophysics Data System (ADS)
Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-01-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1, although with a 1 sigma uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Power spectrum estimation from peculiar velocity catalogues
NASA Astrophysics Data System (ADS)
Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-09-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects the two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measures the volume and quantifies the distribution of the CT numbers of mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially from already-acquired scans.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even with moderate uncertainty (30%) in the variance function, weighted regression still outperforms unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
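A sketch of the recommended power-model weighting is shown below: the variance is modeled as var(y) ≈ a·x^b from replicate calibration standards, and the resulting inverse-variance weights are used in the calibration fit. The two-step fitting procedure and all numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.repeat(np.array([1.0, 5.0, 10.0, 50.0, 100.0]), 8)   # calibration levels, 8 reps
y = 2.0 * x + rng.normal(scale=0.05 * x + 0.1)              # heteroskedastic response

levels = np.unique(x)
variances = np.array([y[x == c].var(ddof=1) for c in levels])
b, log_a = np.polyfit(np.log(levels), np.log(variances), 1)  # fit var = a * x**b
sigma = np.sqrt(np.exp(log_a) * x**b)

# np.polyfit weights multiply residuals, so w = 1/sigma gives inverse-variance WLS
slope, intercept = np.polyfit(x, y, 1, w=1.0 / sigma)
print(f"weighted fit: y = {slope:.3f} x + {intercept:+.3f}")
```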
Milosavljevic, Stephan; McBride, David I; Bagheri, Nasser; Vasiljev, Radivoj M; Mani, Ramakrishnan; Carman, Allan B; Rehn, Borje
2011-04-01
The purpose of this study was to determine exposure to whole-body vibration (WBV) and mechanical shock in rural workers who use quad bikes and to explore how personal, physical, and workplace characteristics influence exposure. A seat-pad-mounted triaxial accelerometer and data logger recorded full-workday vibration and shock data from 130 New Zealand rural workers. Personal, physical, and workplace characteristics were gathered using a modified version of the Whole Body Vibration Health Surveillance Questionnaire. WBVs and mechanical shocks were analysed in accordance with the International Organization for Standardization standards ISO 2631-1 and ISO 2631-5 and are presented as vibration dose value (VDV) and mechanical shock (S(ed)) exposures. VDV(Z) consistently exceeded the European Union guideline exposure action thresholds (Guide to good practice on whole body vibration. Directive 2002/44/EC on minimum health and safety, European Commission Directorate General Employment, Social Affairs and Equal Opportunities, 2006), with some workers exceeding exposure limit thresholds. Exposure to mechanical shock was also evident. Increasing age had the strongest (negative) association with vibration and shock exposure, with body mass index (BMI) having a similar but weaker effect. Age, daily driving duration, dairy farming, and use of two rear shock absorbers created the strongest multivariate model, explaining 33% of the variance in VDV(Z). Only age and dairy farming combined to explain 17% of the variance in daily mechanical shock. Twelve-month prevalence was highest for low back pain at 57.7% and lowest for upper back pain at 13.8%. Personal (age and BMI), physical (shock absorbers and velocity), and workplace characteristics (driving duration and dairy farming) suggest that a mix of engineered workplace and behavioural interventions is required to reduce this level of exposure to vibration and shock.
Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji
2017-01-01
The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated with Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters, based on the eigenvalues and eigenvectors.
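To make the covariance-function machinery concrete, the sketch below evaluates a third-order Legendre covariance function: with days in milk (DIM) standardized to [-1, 1], the (co)variance between two days is phi(t1)' K phi(t2), where K is the coefficient (co)variance matrix. The matrix K and the 305-d standardization are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def phi(dim, order=3, dim_max=305.0):
    t = 2.0 * dim / dim_max - 1.0                 # standardize DIM to [-1, 1]
    return np.array([Legendre.basis(k)(t) for k in range(order)])

K = np.array([[2.0, 0.3, 0.1],                    # hypothetical coefficient
              [0.3, 0.8, 0.0],                    # (co)variance matrix
              [0.1, 0.0, 0.2]])

var_d60 = phi(60.0) @ K @ phi(60.0)               # variance at DIM 60
cov_d60_d200 = phi(60.0) @ K @ phi(200.0)         # covariance between DIM 60 and 200
print(round(var_d60, 3), round(cov_d60_d200, 3))
```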
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the law of propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of this simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even if the functions are non-linear, provided the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
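The contrast between the two approaches can be sketched on a simple non-linear function, here a planar distance computed from coordinate differences with independent errors; the numbers are illustrative only.

```python
import numpy as np

dx, dy, sigma = 100.0, 5.0, 1.0            # coordinate differences [m], std dev [m]

# First-order propagation for d = sqrt(dx^2 + dy^2):
# var(d) = (dx/d)^2 sigma^2 + (dy/d)^2 sigma^2
d = np.hypot(dx, dy)
var_linear = ((dx / d) ** 2 + (dy / d) ** 2) * sigma**2

# Monte Carlo propagation of the same function
rng = np.random.default_rng(4)
sims = np.hypot(dx + sigma * rng.normal(size=200_000),
                dy + sigma * rng.normal(size=200_000))
print(var_linear, sims.var())              # close when the function is near-linear
```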
An application of the LC-LSTM framework to the self-esteem instability case.
Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John
2013-10-01
The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variances in scores of the RGSE can be decomposed into six components: stable self-esteem (40 %), ephemeral (or temporal-state) variance (36 %), stable negative method variance (9 %), stable positive method variance (4 %), specific variance (1 %) and random error variance (10 %). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
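The distinction between the scalar and vector averages, and between the longitudinal and lateral variances, can be illustrated with a small calculation from speed and direction samples; the coordinate conventions and the synthetic meandering winds below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
speed = np.abs(rng.normal(1.0, 0.5, 1000))          # weak winds [m/s]
theta = rng.normal(0.0, 0.8, 1000)                  # direction [rad], strong meandering

u, v = speed * np.cos(theta), speed * np.sin(theta)
scalar_mean = speed.mean()                          # cup-anemometer style average
vector_mean = np.hypot(u.mean(), v.mean())          # never exceeds the scalar mean

# Rotate into the mean-wind frame: longitudinal (along-mean) and lateral components
alpha = np.arctan2(v.mean(), u.mean())
u_lon = u * np.cos(alpha) + v * np.sin(alpha)
v_lat = -u * np.sin(alpha) + v * np.cos(alpha)
print(scalar_mean, vector_mean, u_lon.var(), v_lat.var())
```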
Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.
Zapko-Willmes, Alexandra; Kandler, Christian
2018-01-01
The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.
40 CFR 131.13 - General policies.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR, Water Quality Standards; Establishment of Water Quality Standards. § 131.13 General policies. States may, at their discretion, include in their State standards policies generally affecting their application and implementation, such as mixing zones, low flows and variances. Such policies are subject to EPA review and...
Code of Federal Regulations, 2010 CFR
2010-10-01
...; (b) The calculated tank plating thickness, including any corrosion allowance, must be the minimum thickness without a negative plate tolerance; and (c) The minimum tank plating thickness must not be less...
14 CFR 93.307 - Minimum flight altitudes.
Code of Federal Regulations, 2010 CFR
2010-01-01
14 CFR, Aeronautics and Space; Air Traffic and General Operating Rules; Special Air Traffic Rules; Special Flight Rules in the Vicinity of Grand Canyon National Park, AZ. § 93.307 Minimum flight altitudes. Except in an emergency, or if...
14 CFR 93.307 - Minimum flight altitudes.
Code of Federal Regulations, 2012 CFR
2012-01-01
14 CFR, Aeronautics and Space; Air Traffic and General Operating Rules; Special Air Traffic Rules; Special Flight Rules in the Vicinity of Grand Canyon National Park, AZ. § 93.307 Minimum flight altitudes. Except in an emergency, or if...
The Minimum Wage, Restaurant Prices, and Labor Market Structure
ERIC Educational Resources Information Center
Aaronson, Daniel; French, Eric; MacDonald, James
2008-01-01
Using store-level and aggregated Consumer Price Index data, we show that restaurant prices rise in response to minimum wage increases under several sources of identifying variation. We introduce a general model of employment determination that implies minimum wage hikes cause prices to rise in competitive labor markets but potentially fall in…
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR, Protection of Environment; Environmental Protection Agency (Continued); Water Programs; Water Quality Standards, General Provisions. § 131.6 Minimum requirements for water quality standards submission. The...
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR, Protection of Environment; Environmental Protection Agency (Continued); Water Programs; Water Quality Standards, General Provisions. § 131.6 Minimum requirements for water quality standards submission. The...
Marshall, Andrew J; Evanovich, Emma K; David, Sarah Jo; Mumma, Gregory H
2018-01-17
High comorbidity rates among emotional disorders have led researchers to examine transdiagnostic factors that may contribute to shared psychopathology. Bifactor models provide a unique method for examining transdiagnostic variables by modelling the common and unique factors within measures. Previous findings suggest that the bifactor model of the Depression Anxiety and Stress Scale (DASS) may provide a method for examining transdiagnostic factors within emotional disorders. This study aimed to replicate the bifactor model of the DASS, a multidimensional measure of psychological distress, within a US adult sample and provide initial estimates of the reliability of the general and domain-specific factors. Furthermore, this study hypothesized that Worry, a theorized transdiagnostic variable, would show stronger relations to general emotional distress than domain-specific subscales. Confirmatory factor analysis was used to evaluate the bifactor model structure of the DASS in 456 US adult participants (279 females and 177 males, mean age 35.9 years) recruited online. The DASS bifactor model fitted well (CFI = 0.98; RMSEA = 0.05). The General Emotional Distress factor accounted for most of the reliable variance in item scores. Domain-specific subscales accounted for modest portions of reliable variance in items after accounting for the general scale. Finally, structural equation modelling indicated that Worry was strongly predicted by the General Emotional Distress factor. The DASS bifactor model is generalizable to a US community sample and General Emotional Distress, but not domain-specific factors, strongly predict the transdiagnostic variable Worry.
VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)
NASA Astrophysics Data System (ADS)
Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.
2013-11-01
We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
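A minimal sketch of the kind of comparison described above: with a gamma(a, b) prior (shape a, rate b) on the Poisson intensity, the posterior mean is (sum(x) + a) / (n + b), which is compared with the maximum likelihood estimator (the sample mean) by empirical mean-squared error. The hyperparameters below are assumptions chosen so the prior is centered on the true intensity.

```python
import numpy as np

rng = np.random.default_rng(6)
lam_true, n, a, b = 2.0, 10, 2.0, 1.0        # prior mean a/b = 2 matches lam_true

trials, mse_mle, mse_bayes = 20_000, 0.0, 0.0
for _ in range(trials):
    x = rng.poisson(lam_true, size=n)
    mse_mle += (x.mean() - lam_true) ** 2
    mse_bayes += ((x.sum() + a) / (n + b) - lam_true) ** 2
print(mse_mle / trials, mse_bayes / trials)  # Bayes MSE is smaller near the prior mean
```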
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
Predictors of father-son communication about sexuality.
Lehr, Sally T; Demi, Alice S; Dilorio, Colleen; Facteau, Jeffrey
2005-05-01
Examining the factors that influence adolescents' sexual behaviors is crucial for understanding why they often engage in risky sexual behaviors. Using social cognitive theory, we examined predictors of father-son communication about sexuality. Fathers (N=155) of adolescent sons completed a survey measuring 12 variables, including self-efficacy and outcome expectations. We found that (a) son's pubertal development, father's sex-based values, father's education, father's communication with his own father, outcome expectations, and general communication accounted for 36% of the variance in information-sharing communication, and (b) son's pubertal development, outcome expectations, general communication, and father-son contact accounted for 20% of the variance in values-sharing communication. Study findings can aid professionals in designing guidelines for programs to promote father-son general communication and sex-based communication.
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
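For reference, here is a minimal sketch of Rubin's combining rules discussed above: m completed-data point estimates q_i and within-imputation variances u_i are pooled into a single estimate with total variance T = W + (1 + 1/m)B, where W is the mean within-imputation variance and B the between-imputation variance. The numbers are made up.

```python
import numpy as np

def rubin_combine(q, u):
    q, u = np.asarray(q, float), np.asarray(u, float)
    m = q.size
    qbar = q.mean()                       # pooled point estimate
    W = u.mean()                          # within-imputation variance
    B = q.var(ddof=1)                     # between-imputation variance
    return qbar, W + (1.0 + 1.0 / m) * B  # Rubin's total variance

est, total_var = rubin_combine([1.2, 1.5, 1.1, 1.4, 1.3],
                               [0.20, 0.22, 0.19, 0.21, 0.20])
print(est, total_var)
```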
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with that of existing GHMs. At the same time, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
NASA Astrophysics Data System (ADS)
Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei
2001-06-01
In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers results in consistently close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum values of the MSEs, the computational times, and the statistical properties of the MSEs are all found to be superior compared with those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Khaleghi, Mohammad Reza; Varvani, Javad
2018-02-01
The complex and variable nature of river sediment yield causes many problems in estimating the long-term sediment yield and the sediment input into reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of the SRCs are obtained by logarithmic retransformation and include only a single independent variable, they tend to overestimate or underestimate the true sediment load of the rivers. To evaluate the bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (the ratio of estimated to observed sediment yield) and the precision index of the different bias correction factors, namely FAO, Quasi-Maximum Likelihood Estimator (QMLE), smearing, and Minimum-Variance Unbiased Estimator (MVUE), with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE to the linear and mean-load rating curves has no statistically meaningful effect. QMLE and smearing factors increased the estimation error for the mean-load rating curve, but had no effect on the linear rating curve estimation.
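As background for one of the compared factors, the sketch below fits a log-log rating curve and applies Duan's smearing correction, multiplying retransformed predictions by the mean of the exponentiated residuals; the data are synthetic and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
Q = np.exp(rng.uniform(0.0, 5.0, 200))                          # discharge
Qs = np.exp(1.0 + 1.5 * np.log(Q) + rng.normal(0, 0.6, 200))    # sediment load

b1, b0 = np.polyfit(np.log(Q), np.log(Qs), 1)                   # log-log rating curve
resid = np.log(Qs) - (b0 + b1 * np.log(Q))
smearing = np.exp(resid).mean()                                 # Duan's correction factor

Qs_naive = np.exp(b0 + b1 * np.log(Q))                          # biased retransformation
print(smearing, Qs.mean(), Qs_naive.mean(), (smearing * Qs_naive).mean())
```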
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
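Since the method reduces to the LCMV beamformer for Gaussian data, a sketch of that special case may help: for a lead-field vector a and data covariance R, the weights w = R^{-1}a / (a' R^{-1} a) minimize output variance subject to the unit-gain constraint w'a = 1. The toy data sizes are assumptions.

```python
import numpy as np

def lcmv_weights(R, a):
    Ri_a = np.linalg.solve(R, a)         # R^{-1} a without explicit inversion
    return Ri_a / (a @ Ri_a)

rng = np.random.default_rng(8)
X = rng.normal(size=(64, 5000))          # toy sensor data: channels x samples
R = np.cov(X)
a = rng.normal(size=64)                  # toy lead-field vector for one source
w = lcmv_weights(R, a)
print(w @ a)                             # unit-gain constraint: ~1.0
```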
NASA Astrophysics Data System (ADS)
Kim, Y.; Lee, C.; Kim, J.; Choi, J.; Jee, G.
2010-12-01
We have analyzed wind data from individual meteor echoes detected by a meteor radar at King Sejong Station, Antarctica, to measure gravity wave activity in the mesopause region. Wind data at meteor altitudes have been obtained routinely by the meteor radar since its installation in March 2007. The mean variances in the wind data, after filtering out large-scale motions (mean winds and tides), can be regarded as the gravity wave activity. Monthly mean gravity wave activities show strong seasonal and height dependences in the altitude range of 80 to 100 km. Except in summer, the gravity wave activities increase monotonically with altitude, as expected, since decreasing atmospheric densities cause wave amplitudes to increase. During summer (Dec.-Feb.) the height profiles of gravity wave activities show a minimum near 90-95 km, which may be due to the different zonal wind and strong wind shear near 80-95 km. Our gravity wave activities are generally stronger than those at the Rothera station, implying a sensitive dependence on location. The difference may be related to gravity wave sources in the lower atmosphere near the Antarctic vortex.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
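A sketch of the hybrid kernel described above: a convex combination of a local RBF kernel and a global polynomial kernel. The mixing weight w, RBF width gamma, and polynomial degree d are hyperparameters, fixed here for illustration (the paper tunes them with the chaotic ions motion algorithm).

```python
import numpy as np

def hybrid_kernel(X1, X2, w=0.7, gamma=0.5, d=2, c=1.0):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    k_rbf = np.exp(-gamma * sq)                             # local kernel
    k_poly = (X1 @ X2.T + c) ** d                           # global kernel
    return w * k_rbf + (1.0 - w) * k_poly

X = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(hybrid_kernel(X, X).shape)                            # (5, 5) Gram matrix
```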
Variability of temperature properties over Kenya based on observed and reanalyzed datasets
NASA Astrophysics Data System (ADS)
Ongoma, Victor; Chen, Haishan; Gao, Chujie; Sagero, Phillip Obaigwa
2017-08-01
Updated information on trends of climate extremes is central to the assessment of climate change impacts. This work examines the trends in mean, diurnal temperature range (DTR), maximum, and minimum temperatures over Kenya during 1951-2012, together with the recent (1981-2010) extreme temperature events. The study utilized daily observed and reanalyzed monthly mean, minimum, and maximum temperature datasets. The analysis was carried out based on a set of nine indices recommended by the Expert Team on Climate Change Detection and Indices (ETCCDI). The trends of the mean and the extreme temperatures were determined using the Mann-Kendall rank test, linear regression analysis, and Sen's slope estimator. The December-February (DJF) season records the highest temperatures, while June-August (JJA) records the lowest. The observed rate of warming is +0.15 °C/decade. However, DTR does not show a notable annual trend. Both seasons show an overall warming trend since the early 1970s, with abrupt and significant changes happening around the early 1990s. The warming is more significant in the highland regions than in their lowland counterparts. There is increased variance in temperature. The percentage of warm days and warm nights is observed to increase, a further affirmation of warming. This is a synoptic-scale study that exemplifies how seasonal and decadal analyses, together with annual assessments, are important for understanding temperature variability, which is vital in vulnerability and adaptation studies at a local/regional scale. However, given the quality of the observed data used herein, further studies using longer and denser records are needed to avoid the generalizations made in this study.
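Both trend tools used here are compact enough to state in code. A minimal sketch of the Mann-Kendall test and Sen's slope for an annual series; ignoring ties in the variance formula is a simplifying assumption that real temperature records (which contain repeated values) would need corrected:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(x):
    """Mann-Kendall Z and p-value (no tie correction) plus Sen's slope."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # assumes no tied values
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return z, p, np.median(slopes)                    # slope in units per time step
```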
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
Minimum length from quantum mechanics and classical general relativity.
Calmet, Xavier; Graesser, Michael; Hsu, Stephen D H
2004-11-19
We derive fundamental limits on measurements of position, arising from quantum mechanics and classical general relativity. First, we show that any primitive probe or target used in an experiment must be larger than the Planck length lP. This suggests a Planck-size minimum ball of uncertainty in any measurement. Next, we study interferometers (such as LIGO) whose precision is much finer than the size of any of their individual components and which hence are not obviously limited by the minimum ball. Nevertheless, we deduce a fundamental limit on their accuracy of order lP. Our results imply a device-independent limit on possible position measurements.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
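For reference, the simplest member of the variance family discussed above, the non-overlapping Allan variance, is half the mean squared difference of successive frequency averages, and it presumes equally spaced data, which is exactly why the interpolation step is needed. A minimal numpy sketch (the non-overlapping variant and the variable names are illustrative assumptions):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m (i.e., tau = m * tau0 for sample interval tau0)."""
    y = np.asarray(y, float)
    n = len(y) // m                       # number of complete m-sample blocks
    yb = y[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(yb) ** 2)
```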
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane L.; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, Jill S.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
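The "unbiased with minimum variance" property of kriging comes from solving a constrained least-variance system: weights minimize the estimation variance subject to summing to one. A minimal numpy sketch of ordinary kriging at a single target point, with an assumed exponential covariance whose sill and range are placeholders rather than the fitted Loch Vale model:

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, rng=1000.0):
    """Estimate at `target` and the kriging (minimum estimation) variance.

    coords : (n, 2) observation locations; values : (n,) observations.
    Covariance model: C(h) = sill * exp(-h / rng)  (assumed, not fitted here).
    """
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-h / rng)
    c0 = sill * np.exp(-np.linalg.norm(coords - target, axis=1) / rng)
    n = len(values)
    A = np.block([[C, np.ones((n, 1))],                 # unbiasedness constraint:
                  [np.ones((1, n)), np.zeros((1, 1))]])  # weights sum to one
    sol = np.linalg.solve(A, np.append(c0, 1.0))
    wts, mu = sol[:n], sol[n]                 # weights and Lagrange multiplier
    return wts @ values, sill - wts @ c0 - mu  # estimate, kriging variance
```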
Robertson, David S; Prevost, A Toby; Bowden, Jack
2016-09-30
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Additive-Multiplicative Approximation of Genotype-Environment Interaction
Gimelfarb, A.
1994-01-01
A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of the turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity (MOS) theory, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of the surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis by Panofsky and McCormick that local scaling in terms of the local buoyancy flux defines the lower bound of the moments.
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, $P(\eta)$, based on large deviation statistics of work and heat, that remains very accurate even when $P(\eta)$ deviates significantly from its large deviation form.
Jager, Justin; Bornstein, Marc H; Putnick, Diane L; Hendricks, Charlene
2012-06-01
Using the McMaster Family Assessment Device (Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's "unique perspective" or nonshared, idiosyncratic view of the family. We used a modified multitrait-multimethod confirmatory factor analysis that (a) isolated for each family member's 6 reports of family dysfunction the nonshared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by 1 or more family members and (b) extracted common variance across each family member's set of nonshared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. In addition, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these "unique perspectives" reflect about the family are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene
2012-01-01
Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933
NASA Technical Reports Server (NTRS)
Chapman, Dean R
1952-01-01
A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.
Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T
2013-12-11
The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day records of milk yield of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed (2-5), random genetic (1-7), and permanent environmental (1-7) regression curves and different numbers of classes for residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model using Legendre orthogonal polynomials for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, additive genetic, and permanent environmental regressions and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and the prediction of genetic values.
Zwaanswijk, Wendy; Veen, Violaine C; van Geel, Mitch; Andershed, Henrik; Vedder, Paul
2017-08-01
The current study examines how the bifactor model of the Youth Psychopathic Traits Inventory (YPI) is related to conduct problems in a sample of Dutch adolescents (N = 2,874; 43% female). It addresses to what extent the YPI dimensions explain variance over and above a General Psychopathy factor (i.e., one factor related to all items) and how the general factor and dimensional factors are related to conduct problems. Group differences in these relations for gender, ethnic background, and age were examined. Results showed that the general factor is most important, but dimensions explain variance over and above the general factor. The general factor, and Affective and Lifestyle dimensions, of the YPI were positively related to conduct problems, whereas the Interpersonal dimension was not, after taking the general factor into account. However, across gender, ethnic background, and age, different dimensions were related to conduct problems over and above the general factor. This suggests that all 3 dimensions should be assessed when examining the psychopathy construct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping
2016-02-01
Previous simulation work concerned with the mechanism of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variances were also observed within layers; reducing pulse length could magnify the threshold values as well as the variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation closely resemble recordings in physiological and clinical studies, which seems to suggest that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluations of cell activities in response to an externally applied field.
Non-stationary internal tides observed with satellite altimetry
NASA Astrophysics Data System (ADS)
Ray, R. D.; Zaron, E. D.
2011-09-01
Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 cm2. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.
Non-Stationary Internal Tides Observed with Satellite Altimetry
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Zaron, E. D.
2011-01-01
Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 cm2. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress, and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length), and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance, and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and the propagation of uncertainty from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
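The forward model and the global exploration step lend themselves to a compact sketch. Below, a generalized Brune-type spectrum (low-frequency level, corner frequency, high-frequency decay, and a t* = T/Q attenuation term; this parameterization is an assumption for illustration) is fit by basin hopping on a log-amplitude L2 misfit:

```python
import numpy as np
from scipy.optimize import basinhopping

def displacement_spectrum(f, omega0, fc, gamma, t_star):
    """Generalized Brune-type source spectrum with whole-path attenuation t*."""
    return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** gamma)

def fit_spectrum(f, observed):
    """Global minimum of the log-amplitude L2 misfit via basin hopping."""
    def cost(x):
        model = displacement_spectrum(f, *np.abs(x))   # keep parameters positive
        return np.sum((np.log(observed) - np.log(model)) ** 2)
    x0 = np.array([observed.max(), 5.0, 2.0, 0.02])    # rough starting guess
    return basinhopping(cost, x0, niter=200).x
```

The a-posteriori covariance around the returned minimum could then be approximated from the misfit evaluated on a grid, as the abstract describes.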
Kaufman, Michelle R; Patel, Eshan U; Dam, Kim H; Packman, Zoe R; Van Lith, Lynn M; Hatzold, Karin; Marcell, Arik V; Mavhu, Webster; Kahabuka, Catherine; Mahlasela, Lusanda; Njeuhmeli, Emmanuel; Seifert Ahanda, Kim; Ncube, Getrude; Lija, Gissenge; Bonnecwe, Collen; Tobian, Aaron A R
2018-01-01
Background The minimum package of voluntary medical male circumcision (VMMC) services, as defined by the World Health Organization, includes human immunodeficiency virus (HIV) testing, HIV prevention counseling, screening/treatment for sexually transmitted infections, condom promotion, and the VMMC procedure. The current study aimed to assess whether adolescents received these key elements. Methods Quantitative surveys were conducted among male adolescents aged 10–19 years (n = 1293) seeking VMMC in South Africa, Tanzania, and Zimbabwe. We used a summative index score of 8 self-reported binary items to measure receipt of important elements of the World Health Organization–recommended HIV minimum package and the US President’s Emergency Plan for AIDS Relief VMMC recommendations. Counseling sessions were observed for a subset of adolescents (n = 44). To evaluate factors associated with counseling content, we used Poisson regression models with generalized estimating equations and robust variance estimation. Results Although counseling included VMMC benefits, little attention was paid to risks, including how to identify complications, what to do if they arise, and why avoiding sex and masturbation could prevent complications. Overall, older adolescents (aged 15–19 years) reported receiving more items in the recommended minimum package than younger adolescents (aged 10–14 years; adjusted β, 0.17; 95% confidence interval [CI], .12–.21; P < .001). Older adolescents were also more likely to report receiving HIV test education and promotion (42.7% vs 29.5%; adjusted prevalence ratio [aPR], 1.53; 95% CI, 1.16–2.02) and a condom demonstration with condoms to take home (16.8% vs 4.4%; aPR, 2.44; 95% CI, 1.30–4.58). No significant age differences appeared in reports of explanations of VMMC risks and benefits or uptake of HIV testing. These self-reported findings were confirmed during counseling observations. Conclusions Moving toward age-equitable HIV prevention services during adolescent VMMC likely requires standardizing counseling content, as there are significant age differences in HIV prevention content received by adolescents. PMID:29617776
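The analysis model named in the Methods (Poisson regression with GEE and robust variance, here applied to a binary outcome so that exponentiated coefficients are prevalence ratios) looks roughly like the following statsmodels sketch; the synthetic data, variable names, and exchangeable working correlation are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
site = np.repeat(np.arange(40), 5)           # 40 clusters (e.g., clinics) of 5
older = rng.integers(0, 2, size=n)           # 1 = aged 15-19 (hypothetical)
y = rng.binomial(1, 0.3 + 0.15 * older)      # binary: package item received
X = sm.add_constant(older.astype(float))

# Modified Poisson: log link on a binary outcome yields prevalence ratios;
# GEE with a robust (sandwich) variance accounts for clustering.
model = sm.GEE(y, X, groups=site, family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit(cov_type="robust")
print(np.exp(res.params[1]))                 # adjusted prevalence ratio (aPR)
```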
26 CFR 1.410(a)-3 - Minimum age and service conditions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Minimum age and service conditions. 1.410(a)-3...(a)-3 Minimum age and service conditions. (a) General rule. Except as provided by paragraph (b) or (c... of— (1) Age 25. The date on which the employee attains the age of 25; or (2) One year of service. The...
26 CFR 1.410(a)-3 - Minimum age and service conditions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Minimum age and service conditions. 1.410(a)-3...(a)-3 Minimum age and service conditions. (a) General rule. Except as provided by paragraph (b) or (c... of— (1) Age 25. The date on which the employee attains the age of 25; or (2) One year of service. The...
26 CFR 1.410(a)-3 - Minimum age and service conditions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Minimum age and service conditions. 1.410(a)-3...(a)-3 Minimum age and service conditions. (a) General rule. Except as provided by paragraph (b) or (c... of— (1) Age 25. The date on which the employee attains the age of 25; or (2) One year of service. The...
26 CFR 1.410(a)-3 - Minimum age and service conditions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Minimum age and service conditions. 1.410(a)-3...(a)-3 Minimum age and service conditions. (a) General rule. Except as provided by paragraph (b) or (c... of— (1) Age 25. The date on which the employee attains the age of 25; or (2) One year of service. The...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
Decuyper, Mieke; De Bolle, Marleen; De Fruyt, Filip; De Clercq, Barbara
2011-10-01
Associations between callous-unemotional traits and general and maladaptive personality dimensions are examined in adolescence. More specifically, it was investigated to what extent general and maladaptive personality dimensions can account for the variance in callous-unemotional (CU) scores. Adolescents (N = 509) and their mothers completed the Inventory of Callous-Unemotional Traits (ICU; Frick, 2003), the Hierarchical Personality Inventory for Children (HiPIC; Mervielde & De Fruyt, 1999, 2002), and the Dimensional Personality Symptom Item Pool (DIPSI; De Clercq, De Fruyt, Van Leeuwen, & Mervielde, 2006). Both personality measures accounted for substantial variance in ICU scores and the overall CU profile in terms of the HiPIC and DIPSI was consistent with psychopathy conceptualizations and consistent across informant. Implications for the assessment of early externalizing trait pathology are discussed.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
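The first criterion is straightforward to compute once residuals are in hand. A minimal numpy sketch of the Gaussian pseudolikelihood for comparing working correlation structures, assuming equal cluster sizes and pre-standardized residuals (the cited open-source software handles the general case):

```python
import numpy as np

def gaussian_pseudolik(resid_by_cluster, R):
    """Gaussian log-pseudolikelihood of standardized cluster residuals
    under a working correlation matrix R (all clusters of equal size)."""
    _, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    return sum(-0.5 * (logdet + r @ Rinv @ r) for r in resid_by_cluster)

def exchangeable(m, rho):
    """Exchangeable working correlation of dimension m."""
    return (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))

# Model selection: prefer the working structure with the larger value, e.g.
# gaussian_pseudolik(res, np.eye(4)) vs gaussian_pseudolik(res, exchangeable(4, 0.3))
```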
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent and identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
The reliability of multidimensional neuropsychological measures: from alpha to omega.
Watkins, Marley W
To demonstrate that coefficient omega, a model-based estimate, is a more appropriate index of reliability than coefficient alpha for the multidimensional scales that are commonly employed by neuropsychologists. As an illustration, a structural model with an overarching general factor and four first-order factors for the WAIS-IV, based on the standardization sample of 2200 participants, was identified, and omega coefficients were subsequently computed for WAIS-IV composite scores. Alpha coefficients were ≥ .90 and omega coefficients ranged from .75 to .88 for WAIS-IV factor index scores, indicating that the blend of general and group factor variance in each index score created a reliable multidimensional composite. However, the amalgam of variance from general and group factors did not allow the precision of Full Scale IQ (FSIQ) and factor index scores to be disentangled. In contrast, omega-hierarchical coefficients were low for all four factor index scores (.10-.41), indicating that most of the reliable variance of each factor index score was due to the general intelligence factor, whereas the omega-hierarchical coefficient for the FSIQ score was .84. Meaningful interpretation of WAIS-IV factor index scores as unambiguous indicators of group factors is therefore imprecise, fostering unreliable identification of neurocognitive strengths and weaknesses, whereas the WAIS-IV FSIQ score can be interpreted as a reliable measure of general intelligence. It was concluded that neuropsychologists should base their clinical decisions on reliable scores as indexed by coefficient omega.
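Given a fitted bifactor solution, both coefficients are one-line functions of the loadings. A minimal numpy sketch of omega and omega-hierarchical (the loadings and uniquenesses passed in are placeholders, not the WAIS-IV estimates):

```python
import numpy as np

def omega_total(gen_loadings, group_loadings, uniquenesses):
    """Omega: reliable proportion of composite variance from all factors."""
    g = np.sum(gen_loadings) ** 2
    grp = sum(np.sum(lam) ** 2 for lam in group_loadings)
    return (g + grp) / (g + grp + np.sum(uniquenesses))

def omega_hierarchical(gen_loadings, group_loadings, uniquenesses):
    """Omega-hierarchical: proportion due to the general factor alone."""
    g = np.sum(gen_loadings) ** 2
    grp = sum(np.sum(lam) ** 2 for lam in group_loadings)
    return g / (g + grp + np.sum(uniquenesses))
```

A composite can thus have high omega (reliable) yet low omega-hierarchical (little reliable variance unique to its group factor), which is the pattern reported for the factor index scores.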
Pereira, Sara; Katzmarzyk, Peter T; Gomes, Thayse Natacha; Souza, Michele; Chaves, Raquel N; Santos, Fernanda K Dos; Santos, Daniel; Hedeker, Donald; Maia, José A R
2017-06-01
Somatotype is a complex trait influenced by different genetic and environmental factors as well as by other covariates whose effects are still unclear. To (1) estimate siblings' resemblance in their general somatotype; (2) identify sib-pair (brother-brother (BB), sister-sister (SS), brother-sister (BS)) similarities in individual somatotype components; (3) examine the degree to which between and within variances differ among sib-ships; and (4) investigate the effects of physical activity (PA) and family socioeconomic status (SES) on these relationships. The sample comprises 1058 Portuguese siblings (538 females) aged 9-20 years. Somatotype was calculated using the Health-Carter method, while PA and SES information was obtained by questionnaire. Multi-level modelling was done in SuperMix software. Older subjects showed the lowest values for endomorphy and mesomorphy, but the highest values for ectomorphy; and more physically active subjects showed the highest values for mesomorphy. In general, the familiality of somatotype was moderate (ρ = 0.35). Same-sex siblings had the strongest resemblance (endomorphy: ρ SS > ρ BB > ρ BS ; mesomorphy: ρ BB = ρ SS > ρ BS ; ectomorphy: ρ BB > ρ SS > ρ BS ). For the ectomorphy and mesomorphy components, BS pairs showed the highest between sib-ship variance, but the lowest within sib-ship variance; while for endomorphy BS showed the lowest between and within sib-ship variances. These results highlight the significant familial effects on somatotype and the complexity of the role of familial resemblance in explaining variance in somatotypes.
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto
2017-08-01
Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK) where L was disregarded; and co-kriging (CK) where cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to differences between estimation and true values. However, the CK results gave in general less estimation variance compared to the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approx. 10 and 6 km. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... COMMISSIONER, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property..., electrical, and elevators. (3) For purposes of this paragraph, a state or local code regulates an area if it...
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).
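The bias in question is easy to quantify when hourly data are available: compare the 24-reading mean with the min-max midpoint day by day. A minimal numpy sketch (the flat array layout is an assumption for illustration):

```python
import numpy as np

def minmax_bias(hourly):
    """Per-day difference between (Tmin + Tmax)/2 and the true hourly mean.

    hourly : flat array of hourly temperatures, length a multiple of 24.
    Positive values indicate the min-max average overestimates the mean.
    """
    h = np.asarray(hourly, float).reshape(-1, 24)   # one row per day
    return 0.5 * (h.min(axis=1) + h.max(axis=1)) - h.mean(axis=1)
```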
C-semiring Frameworks for Minimum Spanning Tree Problems
NASA Astrophysics Data System (ADS)
Bistarelli, Stefano; Santini, Francesco
In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially-ordered costs on the edges.
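The key point is that the greedy procedure only needs the order induced by the semiring's additive operation, not numeric minimization per se. A minimal Python sketch of a Prim-style algorithm with a pluggable cost order (the dictionary graph encoding and the `better` interface are illustrative assumptions, not the paper's formalization):

```python
def prim_semiring(graph, start, better=min):
    """Prim-style spanning tree where `better` picks the preferred cost of a
    pair, standing in for the order induced by a c-semiring's + operation.

    graph : {node: [(cost, neighbor), ...]} for an undirected graph.
    """
    visited, tree = {start}, []
    while len(visited) < len(graph):
        frontier = [(c, u, v) for u in visited for (c, v) in graph[u]
                    if v not in visited]
        if not frontier:                     # graph not connected
            break
        best = frontier[0]
        for cand in frontier[1:]:
            if better(cand[0], best[0]) == cand[0]:
                best = cand
        tree.append(best)
        visited.add(best[2])
    return tree

# Weighted semiring: better=min recovers the classical MST.
# Fuzzy/capacity-style criteria: better=max yields a bottleneck-optimal tree.
```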
Mark, Quentin J
2014-01-01
Human height is a heritable trait that is known to be influenced by environmental factors and general standard of living. Individual and population stature is correlated with health, education, and economic achievement. Strong sexual selection pressures for stature have been observed in multiple diverse populations; however, there is significant global variance in gender equality and in prohibitions on female mate selection. This paper explores the contribution of general standard of living and gender inequality to the variance in global female population heights. Female population heights of 96 nations were culled from previously published sources and public access databases. A factor analysis was run with United Nations international data on education rates, life expectancy, incomes, maternal and childhood mortality rates, ratios of gender participation in education and politics, the Human Development Index (HDI), and the Gender Inequality Index (GII). Results indicate that population heights vary more closely with gender inequality than with population health, income, or education.
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
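Combining the two ideas amounts to fitting Box-Cox transformations on the paired record, then extending with the variance-maintaining line rather than the least-squares slope. A minimal scipy sketch (a MOVE.1-style line; function and variable names are illustrative):

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

def boxcox_move_extension(x_long, x_paired, y_paired):
    """Extend record y using the longer record x.

    x_paired, y_paired : concurrent flows at the two gages (positive values).
    x_long : flows at the long-record gage to be translated into y units.
    """
    tx, lam_x = boxcox(x_paired)             # fit scaled power transformations
    ty, lam_y = boxcox(y_paired)
    mx, sx = tx.mean(), tx.std(ddof=1)
    my, sy = ty.mean(), ty.std(ddof=1)
    # MOVE line: slope sy/sx maintains variance, unlike regression's r*sy/sx,
    # so the extended record is not variance-deflated.
    t_new = my + (sy / sx) * (boxcox(x_long, lam_x) - mx)
    return inv_boxcox(t_new, lam_y)
```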
Roemer, Lizabeth; Lee, Jonathan K.; Salters-Pedneault, Kristalyn; Erisman, Shannon M.; Orsillo, Susan M.; Mennin, Douglas S.
2013-01-01
Diminished levels of mindfulness (awareness and acceptance/nonjudgment) and difficulties in emotion regulation have both been proposed to play a role in symptoms of generalized anxiety disorder (GAD); the current studies investigated these relationships in a nonclinical and a clinical sample. In the first study, among a sample of 395 individuals at an urban commuter campus, we found that self reports of both emotion regulation difficulties and aspects of mindfulness accounted for unique variance in GAD symptom severity, above and beyond shared variance with depressive and anxious symptoms, as well as shared variance with one another. In the second study, we found that individuals diagnosed with clinically significant GAD (n = 16) reported significantly lower levels of mindfulness and significantly higher levels of difficulties in emotion regulation than individuals in a non-anxious control group (n = 16). Results are discussed in terms of directions for future research and potential implications for treatment development. PMID:19433145
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
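Of the estimators compared, the successive-difference form is representative and simple to state. A minimal numpy sketch for a single nonreplicated systematic sample of n counts expanded to a total over N sampling units (the finite-population-correction form shown is an assumption for illustration, not necessarily the study's least-biased choice):

```python
import numpy as np

def sd_variance_of_total(y, N):
    """Successive-difference variance estimate of the expanded total.

    y : systematic sample of n passage counts out of N units.
    Positively autocorrelated passage makes this less biased than the
    simple-random-sampling formula, though some bias remains.
    """
    y = np.asarray(y, float)
    n = len(y)
    v_mean = (1.0 - n / N) * np.sum(np.diff(y) ** 2) / (2.0 * n * (n - 1))
    return N ** 2 * v_mean
```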
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.
Böing-Messing, Florian; Mulder, Joris
2018-05-03
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014
On the estimation variance for the specific Euler-Poincaré characteristic of random networks.
Tscheschel, A; Stoyan, D
2003-07-01
The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. In two examples of quite different nature and topology application of the formulas is demonstrated.
Palmprint Based Multidimensional Fuzzy Vault Scheme
Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding
2014-01-01
Fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) used in FVS is not appropriate for dealing with real-valued biometric intraclass variances. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variances effectively but also maintains accuracy while enhancing security. PMID:24892094
A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.
ERIC Educational Resources Information Center
Patterson, G. R.; Yoerger, K.
A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is determined using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.
Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V
2014-12-01
Groundnut in storage is attacked by a number of stored-grain pests, and management of these insect pests, particularly the bruchid beetle Caryedon serratus (Olivier), is of prime importance as they directly damage the pods and kernels. In this regard, the different storage bags that could be used and the duration for which groundnut can be stored were studied. The super grain bag recorded the minimum number of eggs laid, the least damage, and the minimum weight loss in pods and kernels in comparison with the other storage bags. Analysis of variance for multiple regression models was found to be significant in all bags for the variables number of eggs laid, damage in pods and kernels, and weight loss in pods and kernels throughout the season. Multiple comparison results showed that there was a high probability of eggs laid and pod damage in the lino bag, fertilizer bag and gunny bag, whereas the super grain bag was found to be more effective in managing C. serratus owing to its very low air circulation.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
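A minimal sketch of the global minimum-variance weights under shrinkage, using scikit-learn's LedoitWolf estimator on synthetic returns; the paper's hybrid Tyler/Ledoit-Wolf robust estimator and its online tuning of the shrinkage intensity are not reproduced here:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 20))     # synthetic: 250 days, 20 assets

sigma = LedoitWolf().fit(returns).covariance_
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)             # w ~ Sigma^{-1} 1
w /= w.sum()                                 # normalize so weights sum to one
print("minimum-variance portfolio risk:", w @ sigma @ w)
```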
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory based filtering). As empirical support for the simulation results, the analysis is also carried out on a time series of S&P500 stock prices.
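The Random Matrix Theory filtering mentioned above is often implemented as eigenvalue clipping at the Marchenko-Pastur edge; a minimal sketch on a random correlation matrix (the paper's conditional-correlation setting is not reproduced) might look like:

```python
import numpy as np

def rmt_clip(corr, T, N):
    """Replace eigenvalues below the Marchenko-Pastur edge (treated as
    noise) by their average, then restore the unit diagonal."""
    lam_max = (1.0 + np.sqrt(N / T)) ** 2
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    vals[noise] = vals[noise].mean()
    filtered = (vecs * vals) @ vecs.T
    d = np.sqrt(np.diag(filtered))
    return filtered / np.outer(d, d)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 50))           # T=500 observations, N=50 assets
C_filtered = rmt_clip(np.corrcoef(X, rowvar=False), T=500, N=50)
```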
A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.
2012-01-01
The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems, namely nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of such signals. Saccade information is not analysed and is discarded because the data segments are too short to provide consistent, minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.
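The VOR model structure itself is not given in the abstract; as a generic illustration of why a sparse-matrix formulation helps, here is a hedged sketch of sparse least-squares identification with SciPy's lsqr on a synthetic regressor matrix (all names and dimensions are hypothetical):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = sparse_random(2000, 300, density=0.01, format="csr", random_state=2)
x_true = rng.standard_normal(300)
y = A @ x_true + 0.01 * rng.standard_normal(2000)   # output additive noise

x_hat = lsqr(A, y)[0]                                # sparse least squares
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```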
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
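A minimal sketch of the correlation-network and minimum-spanning-tree step on synthetic expression data, assuming the common distance mapping d = sqrt(2(1 - c)); the q-exponential and ranking analyses are not reproduced:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
expr = rng.standard_normal((30, 40))         # 30 genes x 40 time points
corr = np.corrcoef(expr)

# map correlation to a distance so strongly correlated genes are close
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)            # sparse matrix of tree edges
print(mst.nnz, "edges in the minimum spanning tree")  # n - 1 = 29
```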
Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Grimm, Robert E.; Phillips, Roger J.
1991-01-01
Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z_0. The apparent depth of isostatic compensation at z_0 = 1400 km is 180 ± 20 km based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.
Terluin, Berend; de Boer, Michiel R; de Vet, Henrica C W
2016-01-01
The network approach to psychopathology conceives mental disorders as sets of symptoms causally impacting on each other. The strengths of the connections between symptoms are key elements in the description of those symptom networks. Typically, the connections are analysed as linear associations (i.e., correlations or regression coefficients). However, there is insufficient awareness of the fact that differences in variance may account for differences in connection strength. Differences in variance frequently occur when subgroups are based on skewed data. An illustrative example is a study published in PLoS One (2013;8(3):e59559) that aimed to test the hypothesis that the development of psychopathology through "staging" was characterized by increasing connection strength between mental states. Three mental states (negative affect, positive affect, and paranoia) were studied in severity subgroups of a general population sample. The connection strength was found to increase with increasing severity in six of nine models. However, the method used (linear mixed modelling) is not suitable for skewed data. We reanalysed the data using inverse Gaussian generalized linear mixed modelling, a method suited for positively skewed data (such as symptoms in the general population). The distribution of positive affect was normal, but the distributions of negative affect and paranoia were heavily skewed. The variance of the skewed variables increased with increasing severity. Reanalysis of the data did not confirm increasing connection strength, except for one of nine models. Reanalysis of the data did not provide convincing evidence in support of staging as characterized by increasing connection strength between mental states. Network researchers should be aware that differences in connection strength between symptoms may be caused by differences in variances, in which case they should not be interpreted as differences in impact of one symptom on another symptom.
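The authors used inverse Gaussian generalized linear mixed modelling; a fixed-effects sketch of the same distributional family in statsmodels (the mixed/random-effects part and the actual mental-state data are omitted, and all variables here are synthetic) could look like:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
severity = rng.integers(0, 3, size=300)      # hypothetical severity subgroup
x = rng.gamma(2.0, 1.0, size=300)            # hypothetical predictor symptom
mu = np.exp(0.2 + 0.3 * x + 0.2 * severity)
y = rng.wald(mu, 4.0)                        # positively skewed outcome

X = sm.add_constant(np.column_stack([x, severity]))
fam = sm.families.InverseGaussian(sm.families.links.Log())
print(sm.GLM(y, X, family=fam).fit().summary())
```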
Automated real time constant-specificity surveillance for disease outbreaks.
Wieland, Shannon C; Brownstein, John S; Berger, Bonnie; Mandl, Kenneth D
2007-06-13
For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study (p < 0.05). We develop an outbreak detection method, called the expectation-variance model, based on generalized additive modeling to achieve a constant specificity by accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
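The core idea, modelling the variance of visits as well as their expectation so that a fixed alarm quantile yields constant specificity, can be sketched as follows (the generalized additive fits of the paper are replaced by given expectation and variance curves; all numbers are synthetic):

```python
import numpy as np
from scipy.stats import norm

def alarms(observed, expected, variance, specificity=0.99):
    """Flag days whose standardized residual exceeds the normal quantile
    matching the target specificity."""
    z = (observed - expected) / np.sqrt(variance)
    return z > norm.ppf(specificity)

rng = np.random.default_rng(5)
days = np.arange(365)
expected = 50 + 10 * np.sin(2 * np.pi * days / 365)   # seasonal mean visits
variance = 1.5 * expected                              # overdispersion
observed = rng.poisson(expected)
print("false alarms:", alarms(observed, expected, variance).sum(), "of 365")
```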
Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.
2017-01-01
We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
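A toy sketch of the multi-level variance idea in an age-structured projection, with parameter variance drawn once per iteration and temporal variance drawn per time step; the vital rates below are hypothetical placeholders, not pallid sturgeon estimates:

```python
import numpy as np

rng = np.random.default_rng(6)
n_iter, n_years, n_ages = 200, 50, 4
final_sizes = np.empty(n_iter)

for i in range(n_iter):
    # parameter variance: uncertainty about the mean rates themselves
    s_mean = np.clip(rng.normal([0.05, 0.3, 0.6, 0.9], 0.02), 0.0, 1.0)
    f_mean = max(rng.normal(20.0, 2.0), 0.0)        # hypothetical fecundity
    n = np.full(n_ages, 100.0)
    for _ in range(n_years):
        # temporal variance: environmental fluctuation around the means
        s = np.clip(rng.normal(s_mean, 0.05), 0.0, 1.0)
        f = max(rng.normal(f_mean, 3.0), 0.0)
        recruits = f * n[-1] * s[0]                  # age-0 survival of young
        n = np.concatenate([[recruits], n[:-1] * s[1:]])
    final_sizes[i] = n.sum()

print("quasiextinction risk (<50):", np.mean(final_sizes < 50.0))
```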
NASA Astrophysics Data System (ADS)
J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul
2015-05-01
In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold in market share. This paper proposes a design flow around parasitic extraction to improve the design cycle time. The proposed design flow uses metal fill emulation as opposed to the current flow, which performs metal fill insertion directly. Replacing metal fill structures with an emulation methodology in earlier iterations of the design flow is targeted to help reduce runtime in the fill insertion stage. A statistical design of experiments methodology based on the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1x to 6x the minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4x the minimum metal width.
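A hedged sketch of the randomized-complete-block analysis-of-variance step with statsmodels, using made-up capacitance values (the real experiment's test cases and responses are not available here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
widths = np.repeat([1, 2, 4, 6], 5)          # multiples of min metal width
blocks = np.tile(np.arange(5), 4)            # test-case blocks (RCBD)
cap = 10 + 0.5 * widths + 0.2 * blocks + rng.normal(0, 0.3, 20)

df = pd.DataFrame({"width": widths, "block": blocks, "cap": cap})
fit = smf.ols("cap ~ C(width) + C(block)", data=df).fit()
print(anova_lm(fit, typ=2))                  # two-way ANOVA table
```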
Claw length recommendations for dairy cow foot trimming
Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.
2015-01-01
The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm. PMID:26220848
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
Precision gravimetric survey at the conditions of urban agglomerations
NASA Astrophysics Data System (ADS)
Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander
2014-05-01
The growth and aging of large cities lead to irreversible negative changes underground. The study of these changes in urban areas relies mainly on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main targets of an urban gravimetric survey are soil decompaction, which leads to violations of rock strength, and karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement methods. The Gravimetry division of Lomonosov Moscow State University has been examining modern precision Scintrex CG-5 Autograv gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode. Long-term gravimetric measurements were carried out at a base station. They show that the records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of the measurements, characterizes the sensitivity of the system to external noise and varies between devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the residual nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important for gravimetric monitoring or repeated observations, when mid-range anomalies are the targets. For the examined gravimeters, the amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared by the variance of the measurement results and the internal convergence of the series. The variance of the measurement results (from ±2 to ±4 μGal) and its internal convergence are independent of the transportation mode; measurements differ only in processing time and the corresponding number of readings. Importantly, the internal convergence is an individual attribute of the particular device; for the investigated gravimeters it varies from ±3 to ±8 μGal. Various stabilities of the gravimeter base. The most stable base (minimum microseisms) in this experiment was a concrete pedestal; the least stable was a point on the 28th floor. There is no direct dependence of the variance of the measurement results on the external noise level. Moreover, the dispersion between different gravimeters is minimal at the point of highest microseisms. Conclusions. The measurement quality of the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation value and the degree of nonlinearity of its drift. Although these parameters of the tested gravimeters generally correspond to the factory specifications, the best gravimeters should be selected for surveys requiring an accuracy of ±2-5 μGal. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in an urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 meters. The hole parameters obtained as a result of the interpretation align well with a priori data.
Comparing transformation methods for DNA microarray data
Thygesen, Helene H; Zwinderman, Aeilko H
2004-01-01
Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformations issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953
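The variance-ratio quality measure can be sketched directly: estimate measurement variance within replicates and biological variance between conditions, and compare transformations by the resulting ratio. The data below are synthetic, and only a raw-versus-log comparison is shown rather than the paper's full optimization over transformation parameters:

```python
import numpy as np

rng = np.random.default_rng(8)
# hypothetical: 50 genes x 2 biological conditions x 3 technical replicates
gene_level = rng.normal(5.0, 1.0, size=(50, 1, 1))
cond_effect = rng.normal(0.0, 0.5, size=(50, 2, 1))
signal = rng.lognormal(mean=gene_level + cond_effect, sigma=0.2,
                       size=(50, 2, 3))

def variance_ratio(x):
    """F-like measure: biological variance over measurement variance."""
    meas_var = x.var(axis=2, ddof=1).mean()              # within replicates
    bio_var = x.mean(axis=2).var(axis=1, ddof=1).mean()  # between conditions
    return bio_var / meas_var

print("raw ratio:", variance_ratio(signal))
print("log ratio:", variance_ratio(np.log(signal)))
```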
Does space-time torsion determine the minimum mass of gravitating particles?
NASA Astrophysics Data System (ADS)
Böhmer, Christian G.; Burikham, Piyabut; Harko, Tiberiu; Lake, Matthew J.
2018-03-01
We derive upper and lower limits for the mass-radius ratio of spin-fluid spheres in Einstein-Cartan theory, with matter satisfying a linear barotropic equation of state, and in the presence of a cosmological constant. Adopting a spherically symmetric interior geometry, we obtain the generalized continuity and Tolman-Oppenheimer-Volkoff equations for a Weyssenhoff spin fluid in hydrostatic equilibrium, expressed in terms of the effective mass, density and pressure, all of which contain additional contributions from the spin. The generalized Buchdahl inequality, which remains valid at any point in the interior, is obtained, and general theoretical limits for the maximum and minimum mass-radius ratios are derived. As an application of our results we obtain gravitational red shift bounds for compact spin-fluid objects, which may (in principle) be used for observational tests of Einstein-Cartan theory in an astrophysical context. We also briefly consider applications of the torsion-induced minimum mass to the spin-generalized strong gravity model for baryons/mesons, and show that the existence of quantum spin imposes a lower bound for spinning particles, which almost exactly reproduces the electron mass.
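For reference, the classical Buchdahl inequality that the paper generalizes (stated here without the spin-fluid and cosmological-constant corrections derived in the text) bounds the mass-radius ratio of a static, spherically symmetric fluid sphere:

```latex
\[
  \frac{2GM}{c^{2}R} \;\le\; \frac{8}{9}
\]
```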
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
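A minimal numpy sketch of where the prediction error variance (PEV) matrix comes from: the random-effect block of the inverted mixed-model-equation coefficient matrix. The design below is synthetic, uses an identity relationship matrix, and does not implement the paper's correction to the fixed-effects variance matrix:

```python
import numpy as np

rng = np.random.default_rng(9)
n_animals, n_groups, n_obs = 8, 3, 40
group = rng.integers(0, n_groups, n_obs)
animal = rng.integers(0, n_animals, n_obs)
X = np.eye(n_groups)[group]        # contemporary-group incidence matrix
Z = np.eye(n_animals)[animal]      # random animal incidence matrix
lam = 2.0                          # sigma_e^2 / sigma_a^2, with A = I

# Henderson's mixed model equations; the random-effect block of the
# inverse coefficient matrix is the PEV matrix in units of sigma_e^2
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_animals)]])
pev = np.linalg.inv(C)[n_groups:, n_groups:]
print("PEV diagonal (x sigma_e^2):", np.round(np.diag(pev), 3))
```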
Bailey, J A; Samek, D R; Keyes, M A; Hill, K G; Hicks, B M; McGue, M; Iacono, W G; Epstein, M; Catalano, R F; Haggerty, K P; Hawkins, J D
2014-05-01
This paper presents two replications of a heuristic model for measuring environment in studies of gene-environment interplay in the etiology of young adult problem behaviors. Data were drawn from two longitudinal, U.S. studies of the etiology of substance use and related behaviors: the Raising Healthy Children study (RHC; N=1040, 47% female) and the Minnesota Twin Family Study (MTFS; N=1512, 50% female). RHC included a Pacific Northwest, school-based, community sample. MTFS included twins identified from state birth records in Minnesota. Both studies included commensurate measures of general family environment and family substance-specific environments in adolescence (RHC ages 10-18; MTFS age 18), as well as young adult nicotine dependence, alcohol and illicit drug use disorders, HIV sexual risk behavior, and antisocial behavior (RHC ages 24, 25; MTFS age 25). Results from the two samples were highly consistent and largely supported the heuristic model proposed by Bailey et al. (2011). Adolescent general family environment, family smoking environment, and family drinking environment predicted shared variance in problem behaviors in young adulthood. Family smoking environment predicted unique variance in young adult nicotine dependence. Family drinking environment did not appear to predict unique variance in young adult alcohol use disorder. Organizing environmental predictors and outcomes into general and substance-specific measures provides a useful way forward in modeling complex environments and phenotypes. Results suggest that programs aimed at preventing young adult problem behaviors should target general family environment and family smoking and drinking environments in adolescence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
... which shall be applied by all executive agencies. Additional criteria above these minimum standards shall be established by each executive agency, limiting its property to the minimum requirements necessary for the efficient functioning of the particular office concerned. This subpart does not apply to...
49 CFR 192.1003 - What do the regulations in this subpart cover?
Code of Federal Regulations, 2010 CFR
2010-10-01
... AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas...? General. This subpart prescribes minimum requirements for an IM program for any gas distribution pipeline...
Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H
2015-12-01
Elevated levels of late maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and a downgrade of wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and the data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions of the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense heritability and broad-sense heritability for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that the spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with significantly reduced LMAA levels.
NASA Astrophysics Data System (ADS)
Chernikova, Dina; Axell, Kåre; Avdic, Senada; Pázsit, Imre; Nordlund, Anders; Allard, Stefan
2015-05-01
Two versions of the neutron-gamma variance to mean (Feynman-alpha method or Feynman-Y function) formula for either gamma detection only or total neutron-gamma detection, respectively, are derived and compared in this paper. The new formulas have particular importance for detectors of either gamma photons or detectors sensitive to both neutron and gamma radiation. If applied to a plastic or liquid scintillation detector, the total neutron-gamma detection Feynman-Y expression corresponds to a situation where no discrimination is made between neutrons and gamma particles. The gamma variance to mean formulas are useful when a detector of only gamma radiation is used or when working with a combined neutron-gamma detector at high count rates. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions and corresponding intensities for neutrons and gammas, but with the inclusion of prompt reactions only. A one energy group approximation is considered. The comparison of the two different theories is made by using reaction intensities obtained in MCNPX simulations with a simplified geometry for two scintillation detectors and a 252Cf-source. In addition, the variance to mean ratios, neutron, gamma and total neutron-gamma are evaluated experimentally for a weak 252Cf neutron-gamma source, a 137Cs random gamma source and a 22Na correlated gamma source. Due to the focus being on the possibility of using neutron-gamma variance to mean theories for both reactor and safeguards applications, we limited the present study to the general analytical expressions for Feynman-alpha formulas.
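A minimal sketch of the variance-to-mean (Feynman-Y) computation from a list of detection times, applicable to neutron, gamma or total counts alike; the event times below are synthetic and uncorrelated, so Y should be near zero:

```python
import numpy as np

def feynman_y(event_times, gate_widths):
    """Variance-to-mean ratio minus one from counts in consecutive,
    non-overlapping gates of each width."""
    t_max = event_times.max()
    ys = []
    for w in gate_widths:
        counts, _ = np.histogram(event_times, bins=np.arange(0.0, t_max, w))
        ys.append(counts.var(ddof=1) / counts.mean() - 1.0)
    return np.array(ys)

rng = np.random.default_rng(10)
times = np.sort(rng.uniform(0.0, 100.0, 20000))     # uncorrelated source
print(feynman_y(times, np.array([0.01, 0.05, 0.1, 0.5, 1.0])))
```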
Wang, Yuanjia; Chen, Huaihou
2012-01-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
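The computational gain of the spectral route can be sketched as follows: once the eigenvalues of the relevant quadratic-form matrix are known, null samples of a weighted chi-square statistic are simulated directly instead of refitting under the bootstrap (the eigenvalues here are hypothetical):

```python
import numpy as np

def null_samples(eigvals, n_sim=100_000, seed=11):
    """Draw sum_i lambda_i * chi2_1 variates given the eigenvalues of the
    relevant quadratic-form matrix."""
    rng = np.random.default_rng(seed)
    chi2 = rng.chisquare(1, size=(n_sim, len(eigvals)))
    return chi2 @ np.asarray(eigvals)

eigvals = [2.1, 0.9, 0.4, 0.1, 0.02]       # hypothetical spectrum
samples = null_samples(eigvals)
print("simulated 95% critical value:", np.quantile(samples, 0.95))
```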
NASA Astrophysics Data System (ADS)
Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène
2017-07-01
Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (in Earthq Spectra 10:617-653, 1994). In this context, the linear, viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or as multiple combinations) is analyzed using the generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and the specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As the usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the "best performing" site parameters. The best one is the overall velocity contrast between the underlying bedrock and the minimum velocity in the soil column. Because these are the most difficult and expensive parameters to measure, especially for thick deposits, other more convenient parameters are preferred, especially the couple (V_S30, f_0), which leads to a variance reduction of at least 60%. From a code perspective, equations and plots are provided describing the dependence of the short- and mid-period amplification factors Fa and Fv on these two parameters. The robustness of the results is analyzed by performing a similar analysis for two alternative sets of velocity profiles, for which the bedrock velocity is constrained to have the same value for all velocity profiles, which is not the case in the original set.
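A GRNN is essentially a Nadaraya-Watson kernel-regression average; a minimal sketch predicting amplification factors from two site proxies (synthetic data standing in for scaled (V_S30, f_0) pairs) might look like:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.2):
    """Nadaraya-Watson kernel average of training targets (the GRNN)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(12)
proxies = rng.uniform(0.0, 1.0, (200, 2))    # stand-in scaled (Vs30, f0)
af = 1.0 + proxies[:, 0] + 0.5 * rng.normal(size=200)
print(grnn_predict(proxies, af, rng.uniform(0.0, 1.0, (5, 2))))
```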
NASA Technical Reports Server (NTRS)
Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)
2001-01-01
A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
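The minimum variance analysis referred to here is the classical eigen-decomposition of the field covariance matrix; a minimal sketch on synthetic magnetometer data:

```python
import numpy as np

def minimum_variance_analysis(B):
    """Eigen-decomposition of the magnetic-field covariance matrix; the
    eigenvector of the smallest eigenvalue estimates the normal."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(B, rowvar=False))
    return eigvals, eigvecs[:, 0]            # ascending order: min first

rng = np.random.default_rng(13)
B = rng.normal(0.0, [3.0, 1.0, 0.2], size=(500, 3))   # synthetic series
eigvals, normal = minimum_variance_analysis(B)
print("max/min variance ratio:", eigvals[-1] / eigvals[0])
print("estimated normal direction:", normal)
```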
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum mixing time dynamics. This could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
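A minimal sketch of both quantities for a two-state chain: the Kolmogorov-Sinai entropy h = -sum_i pi_i sum_j p_ij log p_ij and a mixing-time proxy from the second-largest eigenvalue modulus:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                   # row-stochastic transition matrix

# stationary distribution: left eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# Kolmogorov-Sinai entropy of the chain (zero terms where p_ij = 0)
logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
kse = -(pi[:, None] * P * logs).sum()

# mixing is governed by the second-largest eigenvalue modulus
slem = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
print(f"KSE = {kse:.4f}, relaxation time ~ {1.0 / (1.0 - slem):.2f} steps")
```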
Modeling rainfall-runoff relationship using multivariate GARCH model
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2013-08-01
The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful for estimating the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived, drastically increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and groundwater fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
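A minimal sketch of the CCC construction: fit (or, as here, simply assume) univariate GARCH(1,1) variances for each series, estimate a constant correlation from the standardized residuals, and form the time-varying covariance. The parameters and series below are synthetic placeholders, not the paper's rainfall-runoff estimates:

```python
import numpy as np

def garch11_var(r, omega, alpha, beta):
    """Conditional variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(14)
rain = rng.normal(size=1000)                 # placeholder residual series
flow = 0.5 * rain + rng.normal(size=1000)

h1 = garch11_var(rain, 0.05, 0.08, 0.90)     # hypothetical parameters
h2 = garch11_var(flow, 0.05, 0.08, 0.90)
rho = np.corrcoef(rain / np.sqrt(h1), flow / np.sqrt(h2))[0, 1]
cov_t = rho * np.sqrt(h1 * h2)               # CCC time-varying covariance
print("constant conditional correlation:", round(rho, 3))
```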
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
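The consistent estimator derived in the paper is not available in standard software; the robust (sandwich) alternative the authors recommend for practice can be obtained, for example, with lifelines' weighted Cox fit (assuming a version exposing weights_col and robust; the data and the true propensity used for the IPT weights below are synthetic):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(15)
n = 500
x = rng.normal(size=n)                          # confounder
p_treat = 1.0 / (1.0 + np.exp(-x))              # true propensity (known here)
treated = (rng.uniform(size=n) < p_treat).astype(int)
T = rng.exponential(1.0 / (0.10 + 0.05 * treated))
E = (rng.uniform(size=n) < 0.8).astype(int)     # event indicator
iptw = np.where(treated == 1, 1.0 / p_treat, 1.0 / (1.0 - p_treat))

df = pd.DataFrame({"T": T, "E": E, "treated": treated, "w": iptw})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
cph.print_summary()
```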
24 CFR 35.155 - Minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Minimum requirements. 35.155 Section 35.155 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development LEAD-BASED PAINT POISONING PREVENTION IN CERTAIN RESIDENTIAL STRUCTURES General Lead-Based Paint...
42 CFR 84.62 - Component parts; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Component parts; minimum requirements. 84.62 Section 84.62 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES General...
29 CFR 780.501 - Statutory provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Agricultural Employees in Processing Shade-Grown Tobacco; Exemption From Minimum Wage and Overtime Pay... Labor Standards Act exempts from the minimum wage requirements of section 6 of the Act and from the... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL...
Antecedents of organizational citizenship behavior among Iranian nurses: a multicenter study.
Taghinezhad, Fakhredin; Safavi, Mahboobe; Raiesifar, Afsaneh; Yahyavi, Sayed Hossein
2015-10-08
Organizational citizenship behavior (OCB) improves efficiency and employees' participation and generally provides a good working atmosphere. This study was conducted to determine the role of job satisfaction (JS), organizational commitment (OC) and procedural justice (PJ) in explaining OCB among nurses working in fifteen educational-treatment centers in Tehran, Iran, to provide guidelines for health care managers' further understanding of how to encourage citizenship behavior among nurses. In this multi-center descriptive-correlational study, 373 nurses were evaluated through a multi-stage cluster sampling method after obtaining approval from the Ethics Committee of Islamic Azad University, Tehran Medical Branch and the Tehran University of Medical Sciences Research Deputy. Nurses who signed the informed consent, held a bachelor's or master's degree, had a minimum of one year of job experience and did not hold an organizational management position during the questionnaire distribution were included in the study. In order to collect data, a demographic questionnaire, the Podsakoff et al. (Leadersh Q 1(2):107-142, 1990) OCB questionnaire, an OC questionnaire, the Aelterman et al. (Educ Stud 33(3):285-297, 2007) JS questionnaire and a PJ questionnaire were used. These questionnaires were translated into Persian, their content validity was confirmed by an expert group, and their reliability, calculated by the internal-consistency Cronbach alpha coefficient, was satisfactory. Data were analyzed by descriptive statistics, comparative mean tests, correlation coefficients and multiple regression in SPSS software version 11. The general mean and all five aspects of OCB ranked higher than 3 and were evaluated as being in a "quite desired" state. The mean for perceived procedural justice, the general mean for JS and the general mean for OC among the nurses were also in a "quite desired" state. Findings from multiple regression indicated that OC and PJ together explained about 19 % of the variance in OCB, which is statistically significant (P < 0.01). JS had no significant impact on explaining OCB. OC was the strongest predictor of nurses' OCB, followed by perceived procedural justice. Improving these factors can therefore foster better citizenship behavior among nurses.
Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen
2018-01-31
It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
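The iterative quadratic-fitting idea can be sketched as follows: fit a parabola in a window around the current peak estimate and re-center the window on the fitted vertex, repeating until it stabilizes. The spectrum below is synthetic, with a known peak at 10.82 GHz:

```python
import numpy as np

def fit_peak(freq, spec, half_window, n_iter=5):
    """Quadratic fit around the spectral peak, re-centering the fitting
    window on the fitted vertex at each iteration."""
    center = freq[np.argmax(spec)]
    for _ in range(n_iter):
        sel = np.abs(freq - center) <= half_window
        a, b, _ = np.polyfit(freq[sel], spec[sel], 2)
        center = -b / (2.0 * a)              # vertex of the parabola
    return center

rng = np.random.default_rng(16)
f = np.linspace(10.6, 11.0, 401)             # GHz grid, 1 MHz step
spec = np.exp(-((f - 10.82) / 0.03) ** 2) + 0.05 * rng.normal(size=f.size)
print("estimated BFS: %.4f GHz" % fit_peak(f, spec, half_window=0.03))
```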
Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles
NASA Astrophysics Data System (ADS)
Jain, Dhanesh; Lalwani, Mahendra
2018-05-01
The performance of a photovoltaic panel is strongly affected by changes in atmospheric conditions and by the angle of inclination. This article evaluates the optimum tilt angle and orientation (surface azimuth) angle for a solar photovoltaic array in order to maximize solar irradiance and to reduce the variance of radiation over different sets or subsets of time periods. Non-linear regression and the adaptive neuro-fuzzy inference system (ANFIS) are used to predict solar radiation; the ANFIS results are more accurate than those of non-linear regression. These results are then used to evaluate the correlation and to estimate the optimum combination of tilt angle and orientation angle with the help of the general algebraic modelling system and a multi-objective genetic algorithm. The hourly average solar irradiance is calculated at different combinations of tilt and orientation angles from horizontal-surface radiation data for Jodhpur (Rajasthan, India), for three cases: zero variance, actual variance, and double variance, at different time scenarios. It is concluded that monthly adjustment of the angles produces better results than bimonthly, seasonal, half-yearly, or yearly adjustment: the gain for monthly varying angles is 4.6% higher with zero variance and 3.8% higher with actual variance than for annually fixed angles.
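The two objectives here, high mean irradiance and low variance, can be combined into a weighted-sum score and searched over a tilt/azimuth grid. The sketch below uses a bare cosine-of-incidence projection of beam irradiance on synthetic clear-sky data; the transposition model, the penalty weight `lam`, and all numbers are simplifying assumptions, not the GAMS/genetic-algorithm formulation of the article.

```python
import numpy as np

def incidence_cosine(tilt, azimuth, sun_elev, sun_azim):
    """Cosine of the angle between the panel normal and the sun (radians)."""
    return (np.sin(sun_elev) * np.cos(tilt)
            + np.cos(sun_elev) * np.sin(tilt) * np.cos(sun_azim - azimuth))

# Synthetic clear-sky day: solar elevation/azimuth over 12 daylight hours.
hours = np.linspace(6, 18, 49)
sun_elev = np.radians(np.clip(75.0 * np.sin(np.pi * (hours - 6) / 12), 0, None))
sun_azim = np.radians(np.linspace(90, 270, hours.size))   # east to west
dni = 900.0 * np.sin(sun_elev)                            # toy beam irradiance

best, lam = None, 0.5   # lam weights the spread penalty (assumed value)
for tilt in np.radians(np.arange(0, 61, 1)):
    for azim in np.radians(np.arange(120, 241, 5)):       # around due south
        g = dni * np.clip(incidence_cosine(tilt, azim, sun_elev, sun_azim),
                          0, None)
        # Weighted-sum scalarization of the two objectives; the standard
        # deviation keeps the penalty in the same units as the mean.
        score = g.mean() - lam * g.std()
        if best is None or score > best[0]:
            best = (score, np.degrees(tilt), np.degrees(azim))
print(f"best tilt {best[1]:.0f} deg, azimuth {best[2]:.0f} deg")
```

Sweeping `lam` traces out the trade-off curve between collected energy and its variability, which is the Pareto front the multi-objective genetic algorithm explores.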
Spectral decomposition of internal gravity wave sea surface height in global models
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-10-01
Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High-wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ~50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.
Are stock prices too volatile to be justified by the dividend discount model?
NASA Astrophysics Data System (ADS)
Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ
2007-03-01
This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.
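The variance-bound logic being tested can be stated compactly. Writing $p_t^*$ for the ex post rational price (the discounted present value of realized dividends) and $p_t$ for the observed price, rational expectations make $p_t$ an optimal forecast of $p_t^*$, so the forecast error $u_t = p_t^* - p_t$ is uncorrelated with $p_t$ and

$$ \operatorname{var}(p_t^*) = \operatorname{var}(p_t) + \operatorname{var}(u_t) \;\ge\; \operatorname{var}(p_t). $$

Excess-volatility findings are violations of this inequality; the conditional version examined here, simulated from the Brock model, is found to hold.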
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
Memory is Not Enough: The Neurobiological Substrates of Dynamic Cognitive Reserve.
Serra, Laura; Bruschini, Michela; Di Domenico, Carlotta; Gabrielli, Giulia Bechi; Marra, Camillo; Caltagirone, Carlo; Cercignani, Mara; Bozzali, Marco
2017-01-01
Changes in residual memory variance are considered a dynamic aspect of cognitive reserve (d-CR). We aimed to investigate for the first time the neural substrate associated with changes in residual memory variance over time in patients with amnestic mild cognitive impairment (aMCI). Thirty-four aMCI patients followed up for 36 months and 48 healthy elderly individuals (HE) were recruited. All participants underwent 3T MRI, with T1-weighted images collected for voxel-based morphometry (VBM), and an extensive neuropsychological battery including six episodic memory tests. In patients and controls, factor analyses of the episodic memory scores were used to obtain a composite memory score (C-MS). Partial least squares analyses were used to decompose the variance of the C-MS into latent variables (LT scores), accounting for demographic variables and for the general level of cognitive efficiency; linear regressions were applied to the LT scores, stripping off any contribution of general cognitive abilities, to obtain the residual memory variance, considered an index of d-CR. LT scores and d-CR were used in a discriminant analysis, in patients only, and as variables of interest in the VBM analysis. The d-CR score was not able to correctly classify patients. In both aMCI patients and HE, the LT1st and d-CR scores correlated with grey matter volumes in common as well as specific brain areas. CR measures limited to memory function alone are likely less sensitive for detecting cognitive decline and predicting the evolution of Alzheimer's disease. In conclusion, d-CR requires a measure of general cognition to identify conversion to Alzheimer's disease efficiently.
Homeostatic maintenance via degradation and repair of elastic fibers under tension
NASA Astrophysics Data System (ADS)
Alves, Calebe; Araújo, Ascanio D.; Oliveira, Cláudio L. N.; Imsirovic, Jasmin; Bartolák-Suki, Erzsébet; Andrade, José S.; Suki, Béla
2016-06-01
Cellular maintenance of the extracellular matrix requires an effective regulation that balances enzymatic degradation with the repair of collagen fibrils and fibers. Here, we investigate the long-term maintenance of elastic fibers under tension combined with diffusion of general degradative and regenerative particles associated with digestion and repair processes. Computational results show that homeostatic fiber stiffness can be achieved by assuming that cells periodically probe fiber stiffness to adjust the production and release of degradative and regenerative particles. However, this mechanism is unable to maintain a homogeneous fiber. To account for axial homogeneity, we introduce a robust control mechanism that is locally governed by how the binding affinity of particles is modulated by mechanical forces applied to the ends of the fiber. This model predicts diameter variations along the fiber that are in agreement with the axial distribution of collagen fibril diameters obtained from scanning electron microscopic images of normal rat thoracic aorta. The model predictions match the experiments only when the applied force on the fiber is in the range where the variance of local stiffness along the fiber takes a minimum value. Our model thus predicts that the biophysical properties of the fibers play an important role in the long-term regulatory maintenance of these fibers.
Range and azimuth resolution enhancement for 94 GHz real-beam radar
NASA Astrophysics Data System (ADS)
Liu, Guoqing; Yang, Ken; Sykora, Brian; Salha, Imad
2008-04-01
In this paper, two-dimensional (2D) (range and azimuth) resolution enhancement is investigated for millimeter wave (mmW) real-beam radar (RBR) with linear or non-linear antenna scan in the azimuth dimension. We design a new super-resolution processing architecture, in which a dual-mode approach is used to define the region of interest for 2D resolution enhancement and a combined approach is deployed to obtain accurate location and amplitude estimates of targets within that region. To achieve 2D resolution enhancement, we first adopt the Capon beamformer (CB) approach (also known as the minimum variance method (MVM)) to enhance range resolution. A generalized CB (GCB) approach is then applied to the azimuth dimension for azimuth resolution enhancement. The GCB approach does not depend on whether the azimuth sampling is uniform and thus can be used in both linear and non-linear antenna scanning modes. The effectiveness of the resolution enhancement is demonstrated using both simulation and test data. Results on data from a 94 GHz real-beam frequency-modulated continuous wave (FMCW) radar show that the overall image quality is significantly improved, as judged by visual evaluation and comparison with the original real-beam radar image.
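The Capon beamformer at the heart of the range processing minimizes output power subject to a distortionless constraint, yielding the pseudo-spectrum P(f) = 1/(aᴴR⁻¹a). The snippet below is a hedged sketch on a one-dimensional sample sequence, not the paper's 94 GHz chain; the two-tone test signal, overlapping-snapshot covariance estimate, and diagonal loading level are illustrative assumptions.

```python
import numpy as np

def capon_spectrum(snapshots, grid, loading=1e-3):
    """Capon / minimum-variance spectrum: P(f) = 1 / (a^H R^-1 a).
    snapshots: (M, K) matrix of K length-M data snapshots."""
    M, K = snapshots.shape
    R = snapshots @ snapshots.conj().T / K
    R += loading * np.trace(R).real / M * np.eye(M)   # diagonal loading
    Rinv = np.linalg.inv(R)
    n = np.arange(M)
    spec = []
    for f in grid:
        a = np.exp(2j * np.pi * f * n)                # steering vector
        spec.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(spec)

# Two tones closer than the length-16 Fourier resolution limit (1/16).
rng = np.random.default_rng(1)
M, N = 16, 128
t = np.arange(N)
x = (np.exp(2j * np.pi * 0.20 * t) + np.exp(2j * np.pi * 0.23 * t)
     + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))
snaps = np.stack([x[k:k + M] for k in range(N - M)], axis=1)
grid = np.linspace(0.10, 0.35, 501)
P = capon_spectrum(snaps, grid)
inner = P[1:-1]
is_peak = (inner > P[:-2]) & (inner > P[2:])          # local maxima
pk_f, pk_v = grid[1:-1][is_peak], inner[is_peak]
print("two strongest peaks near:", np.sort(pk_f[np.argsort(pk_v)[-2:]]))
```

With the same covariance machinery, evaluating the steering vector at the actual (possibly non-uniform) azimuth sample positions instead of a uniform grid gives the generalized-CB flavor described above.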
Castillo, Rodrigo; Nieto, Raquel; Drumond, Anita; Gimeno, Luis
2014-01-01
The Lagrangian FLEXPART model has been used during the last decade to detect moisture sources that affect the climate in different regions of the world. While most of these studies provided a climatological perspective on the atmospheric branch of the hydrological cycle in terms of precipitation, none assessed the minimum temporal domain for which the climatological approach is valid. The methodology identifies the contribution of humidity to the moisture budget in a region by computing the changes in specific humidity along backward (or forward) trajectories of air masses over a period of ten days beforehand (afterwards), thereby allowing the calculation of monthly, seasonal and annual averages. As an example, the current study calculates the climatological seasonal mean and variance of the net precipitation for regions in which precipitation exceeds evaporation (E-P<0) for the North Atlantic moisture source region over different time periods, for winter and summer from 1980 to 2000. The results show that net evaporation (E-P>0) can be discounted in the integration of E-P without affecting the general net precipitation patterns, provided it is discounted on a monthly or longer time scale. PMID:24893002
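The diagnostic behind this methodology (following Stohl and James, 2004) attributes to each tracked air parcel of mass $m$ a freshwater flux given by its rate of change of specific humidity $q$, and aggregates the $K$ parcels residing over a target area $A$:

$$ e - p = m\,\frac{dq}{dt}, \qquad (E - P) \approx \frac{1}{A}\sum_{k=1}^{K}(e-p)_k. $$

Integrated over a month or longer, negative values identify the sink regions (precipitation exceeding evaporation) discussed above.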
12 CFR 324.10 - Minimum capital requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Minimum capital requirements. 324.10 Section 324.10 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY CAPITAL ADEQUACY OF FDIC-SUPERVISED INSTITUTIONS Capital Ratio Requirements and Buffers § 324.10...
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
40 CFR 262.104 - What are the minimum performance criteria?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) SOLID WASTES (CONTINUED) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE University Laboratories... criteria? The Minimum Performance Criteria that each University must meet in managing its Laboratory Waste are: (a) Each University must label all laboratory waste with the general hazard class and either the...
Setting Standards for Minimum Competency Tests.
ERIC Educational Resources Information Center
Mehrens, William A.
Some general questions about minimum competency tests are discussed, and various methods of setting standards are reviewed with major attention devoted to those methods used for dichotomizing a continuum. Methods reviewed under the heading of Absolute Judgments of Test Content include Nedelsky's, Angoff's, Ebel's, and Jaeger's. These methods are…
77 FR 60625 - Minimum Internal Control Standards for Class II Gaming
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
...-37 Minimum Internal Control Standards for Class II Gaming AGENCY: National Indian Gaming Commission... Internal Control Standards that were published on September 21, 2012. DATES: The effective date [email protected] . FOR FURTHER INFORMATION CONTACT: Jennifer Ward, Attorney, NIGC Office of General Counsel, at...
Minimum Knowledge and Skills Objectives for Alcohol and Other Drug Abuse Teaching.
ERIC Educational Resources Information Center
American Psychiatric Association, Hartford, CT.
This publication brings together statements concerning the minimum knowledge and skills objectives in alcohol and other drug abuse determined by the professional organizations of six medical specialties: pediatrics; emergency medicine; obstetrics and gynecology; psychiatry; general internal medicine; and family medicine for undergraduate,…
MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.
ERIC Educational Resources Information Center
Pennsylvania State Dept. of Public Instruction, Harrisburg.
Minimum area space requirements in square footage for elementary school building facilities are presented, including facilities for instructional use, general use, and service use. Library, cafeteria, kitchen, storage, and multipurpose rooms should be sized for the projected enrollment of the building in accordance with the projection under the…
40 CFR 600.010 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Vehicle test requirements and minimum data requirements. 600.010 Section 600.010 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES General...
40 CFR 600.010 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Vehicle test requirements and minimum data requirements. 600.010 Section 600.010 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES General...
40 CFR 600.010 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Vehicle test requirements and minimum data requirements. 600.010 Section 600.010 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES General...
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is associated reliably with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor for initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless only small part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the small proportion of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Deng, Yuewen; Liu, Xiao; Zhang, Guofan; Wu, Fucun
2010-11-01
We conducted a complete diallel cross among three geographically isolated populations of Pacific abalone Haliotis discus hannai Ino to determine the heterosis and the combining ability of growth traits at the spat stage. The three populations were collected from Qingdao (Q) and Dalian (D) in China, and Miyagi (M) in Japan. We measured the shell length, shell width, and total weight. The magnitude of the general combining ability (GCA) variance was more pronounced than that of the specific combining ability (SCA) variance, as evidenced by both the ratio of the genetic component in total variation and the GCA/SCA values. The component variances of GCA and SCA were significant for all three traits (P<0.05), indicating the importance of additive and non-additive genetic effects in determining the expression of these traits. The reciprocal maternal effects (RE) were also significant for these traits (P<0.05). Our results suggest that population D was the best general combiner in breeding programs to improve growth traits. The DM cross had the highest heterosis values for all three traits.
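The GCA/SCA decomposition follows the standard diallel (Griffing-type) model; for offspring $k$ of dam population $i$ and sire population $j$,

$$ y_{ijk} = \mu + g_i + g_j + s_{ij} + r_{ij} + e_{ijk}, $$

where the $g$ terms are general combining abilities (additive effects of each parental population), $s_{ij}$ is the specific combining ability of the pair (non-additive effects), $r_{ij}$ is the reciprocal (here maternal) effect distinguishing the $i \times j$ cross from $j \times i$, and $e_{ijk}$ is the residual. The GCA/SCA variance ratio then indexes the relative weight of additive versus non-additive gene action.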
Genetic control of residual variance of yearling weight in Nellore beef cattle.
Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R
2017-04-01
There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity of selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (<0.007). Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting its presence beyond the scale effect. The DHGLM showed higher predictive ability of EBV for residual variance and therefore should be preferred over the two-step approach.
A general factor of personality from multitrait-multimethod data and cross-national twins.
Rushton, J Philippe; Bons, Trudy Ann; Ando, Juko; Hur, Yoon-Mi; Irwing, Paul; Vernon, Philip A; Petrides, K V; Barbaranelli, Claudio
2009-08-01
In three studies, a General Factor of Personality (GFP) was found to occupy the apex of the hierarchical structure. In Study 1, a GFP emerged independent of method variance and accounted for 54% of the reliable variance in a multitrait-multimethod assessment of 391 Italian high school students that used self-, teacher-, and parent-ratings on the Big Five Questionnaire - Children. In Study 2, a GFP was found in the seven dimensions of Cloninger's Temperament and Character Inventory as well as the Big Five of the NEO PI-R, with the GFPtci correlating r = .72 with the GFPneo. These results indicate that the GFP is practically the same in both test batteries, and its existence does not depend on being extracted using the Big Five model. The GFP accounted for 22% of the total variance in these trait measures, which were assessed in 651 pairs of 14- to 30-year-old Japanese twins. In Study 3, a GFP accounted for 32% of the total variance in nine scales derived from the NEO PI-R, the Humor Styles Questionnaire, and the Trait Emotional Intelligence Questionnaire assessed in 386 pairs of 18- to 74-year-old Canadian and U.S. twins. The GFP was found to be 50% heritable with high scores indicating openness, conscientiousness, sociability, agreeableness, emotional stability, good humor and emotional intelligence. The possible evolutionary origins of the GFP are discussed.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian, independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the Gaussian error assumption, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
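Step (1), the equivalence between maximizing NSE and maximizing a Gaussian i.i.d. likelihood, follows because both are monotone decreasing functions of the residual sum of squares, so they share the same argmax. A minimal numerical sketch (illustrative data, not the SWAT-WB-VSA setup; `boxcox` is included only to show the transform from step (2), with `lam` as the BC parameter):

```python
import numpy as np

def boxcox(q, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: affine decreasing function of the SSE."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(obs, sim):
    """Gaussian i.i.d. log-likelihood with the MLE of sigma^2 plugged in;
    a monotone decreasing function of the same SSE."""
    n = obs.size
    s2 = np.mean((obs - sim) ** 2)
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)

rng = np.random.default_rng(2)
obs = np.exp(rng.standard_normal(200)) + 1.0     # skewed, positive "flows"
for scale in (0.8, 0.9, 1.0, 1.1):               # candidate "simulations"
    sim = scale * obs + 0.1 * rng.standard_normal(200)
    print(f"scale={scale}: NSE={nse(obs, sim):+.3f}  "
          f"logL={gauss_loglik(obs, sim):+.1f}")  # identical ranking

# Transformed-residual likelihoods (step 2) apply the same machinery to
# boxcox(obs, lam) - boxcox(sim, lam) for an estimated lam.
```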
Factor Analysis by Generalized Least Squares.
ERIC Educational Resources Information Center
Joreskog, Karl G.; Goldberger, Arthur S.
Aitkin's generalized least squares (GLS) principle, with the inverse of the observed variance-covariance matrix as a weight matrix, is applied to estimate the factor analysis model in the exploratory (unrestricted) case. It is shown that the GLS estimates are scale free and asymptotically efficient. The estimates are computed by a rapidly…
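The GLS principle referred to here minimizes, over the factor model parameters $\theta$ of $\Sigma(\theta) = \Lambda\Lambda' + \Psi$, the discrepancy between the sample covariance matrix $S$ and the model-implied covariance, weighted by $S^{-1}$:

$$ F_{GLS}(\theta) = \tfrac{1}{2}\,\operatorname{tr}\!\left[\left(I - S^{-1}\Sigma(\theta)\right)^{2}\right]. $$

Minimizers of this function are the scale-free, asymptotically efficient estimates described in the abstract.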
Sleep and nutritional deprivation and performance of house officers.
Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R
1985-07-01
A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
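The flavor of such a Monte Carlo comparison can be reproduced with a simple parametric empirical Bayes (gamma-Poisson) shrinkage estimator; the smooth estimator studied by the author differs in detail, so treat this as an illustrative stand-in rather than the paper's procedure. True rates are drawn from a gamma prior, one Poisson count is observed per unit, and the mean-squared errors of the MLE (the raw count) and the shrunken estimate are compared:

```python
import numpy as np

rng = np.random.default_rng(3)
m, reps = 50, 2000            # 50 units, 2000 Monte Carlo replicates
shape, scale = 2.0, 1.5       # true gamma prior on the Poisson rate

mse_mle, mse_eb = 0.0, 0.0
for _ in range(reps):
    lam = rng.gamma(shape, scale, m)       # true rates
    x = rng.poisson(lam)                   # one observation per unit
    # Method-of-moments fit of the gamma prior from the counts:
    # E[X] = a*b and Var[X] = a*b + a*b^2 (Poisson + prior variance).
    mean, var = x.mean(), x.var()
    b = max((var - mean) / max(mean, 1e-9), 1e-9)
    a = mean / b
    # Gamma-Poisson posterior mean: shrink the MLE toward the prior mean.
    eb = (a + x) * b / (b + 1.0)
    mse_mle += np.mean((x - lam) ** 2)
    mse_eb += np.mean((eb - lam) ** 2)
print(f"MSE  MLE: {mse_mle/reps:.3f}   empirical Bayes: {mse_eb/reps:.3f}")
```

The shrinkage estimator shows the qualitative result reported: a clear reduction in mean-squared error relative to the conventional estimator.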
A Multipath Mitigation Algorithm for vehicle with Smart Antenna
NASA Astrophysics Data System (ADS)
Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang
2018-01-01
In this paper, an adaptive antenna-array method is used to eliminate multipath interference at the GPS L1 frequency. The power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm are combined for anti-multipath processing: the antenna array is simulated and verified, the program is implemented in an FPGA, and actual tests are carried out on a CBD road. Theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms, together with the tests, verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. This work offers some guidance and reference for satellite navigation in vehicle engineering practice.
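For reference, the two beamformers being compared differ only in their constraints: power inversion (PI) minimizes array output power with a fixed weight on a reference element and needs no direction information, while MVDR minimizes output power subject to a distortionless response toward the desired satellite, giving the classical weight vector

$$ \mathbf{w}_{\mathrm{MVDR}} = \frac{\mathbf{R}^{-1}\,\mathbf{a}(\theta_0)}{\mathbf{a}^{H}(\theta_0)\,\mathbf{R}^{-1}\,\mathbf{a}(\theta_0)}, $$

where $\mathbf{R}$ is the array covariance matrix and $\mathbf{a}(\theta_0)$ the steering vector of the desired signal. The extra constraint protects the desired signal while nulling multipath, consistent with the performance ranking reported.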
NASA Technical Reports Server (NTRS)
Grappin, R.; Velli, M.
1995-01-01
The solar wind is not an isotropic medium; it has two symmetry axes, the first along the radial direction (because the mean wind is radial) and the second along the spiral direction of the mean magnetic field, which depends on heliocentric distance. Observations show very different anisotropy directions depending on the frequency band: while the large-scale velocity fluctuations are essentially radial, the smaller-scale magnetic field fluctuations are mostly perpendicular to the mean field direction, which is not the expected linear (WKB) result. We attempt to explain how these properties are related, with the help of numerical simulations.
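The variance directions referred to here come from the classical minimum variance analysis: eigen-decomposition of the covariance matrix of the field components over an interval, with the maximum-to-minimum eigenvalue ratio giving the quoted power anisotropy. A self-contained sketch on synthetic data (the fluctuation amplitudes are assumptions chosen to mimic transverse-dominated turbulence):

```python
import numpy as np

def minimum_variance_analysis(B):
    """B: (N, 3) time series of the magnetic field vector.
    Returns eigenvalues (ascending) and eigenvectors (columns) of the
    component covariance matrix; the first eigenvector is the minimum
    variance direction, and lambda_max/lambda_min is the anisotropy."""
    M = np.cov(B, rowvar=False)          # 3x3 covariance of the components
    w, V = np.linalg.eigh(M)             # eigh returns ascending eigenvalues
    return w, V

# Synthetic interval: mean field along z, transverse fluctuations dominant.
rng = np.random.default_rng(4)
N = 4096
B = np.column_stack([
    2.0 * rng.standard_normal(N),        # x: large transverse power
    1.5 * rng.standard_normal(N),        # y: intermediate power
    5.0 + 0.5 * rng.standard_normal(N),  # z: mean field + weak parallel power
])
w, V = minimum_variance_analysis(B)
print("anisotropy (max/min):", w[-1] / w[0])
print("minimum variance direction:", np.round(V[:, 0], 3))  # ~ z-hat here
```

Repeating the analysis for windows of increasing length is how the interval-dependence of the anisotropy (the roughly 3:1 versus 20:1 discrepancy) can be reproduced.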
38 CFR 3.12a - Minimum active-duty service requirement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Minimum active-duty service requirement. 3.12a Section 3.12a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.12a...
38 CFR 3.12a - Minimum active-duty service requirement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Minimum active-duty service requirement. 3.12a Section 3.12a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.12a...
38 CFR 3.12a - Minimum active-duty service requirement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Minimum active-duty service requirement. 3.12a Section 3.12a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.12a...
38 CFR 3.12a - Minimum active-duty service requirement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Minimum active-duty service requirement. 3.12a Section 3.12a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.12a...
38 CFR 3.12a - Minimum active-duty service requirement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Minimum active-duty service requirement. 3.12a Section 3.12a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.12a...
Proposal for Support of Miami Inner City Marine Summer Intern Program, Dade County.
1987-12-21
employer NUMBER OF POSITIONS ONE MINIMUM AGE 16 SPECIAL REQUIREMENTS * General Science * Basic knowledge of library procedures; an interest in library ... science is helpful * Minimum Grade Point Average 3.0 DRESS REQUIREMENTS Discuss with employer JOB DESCRIPTION * Catalogs and files new sets of
14 CFR 91.119 - Minimum safe altitudes: General.
Code of Federal Regulations, 2014 CFR
2014-01-01
... than 500 feet to any person, vessel, vehicle, or structure. (d) Helicopters, powered parachutes, and... surface— (1) A helicopter may be operated at less than the minimums prescribed in paragraph (b) or (c) of this section, provided each person operating the helicopter complies with any routes or altitudes...
14 CFR 91.119 - Minimum safe altitudes: General.
Code of Federal Regulations, 2013 CFR
2013-01-01
... than 500 feet to any person, vessel, vehicle, or structure. (d) Helicopters, powered parachutes, and... surface— (1) A helicopter may be operated at less than the minimums prescribed in paragraph (b) or (c) of this section, provided each person operating the helicopter complies with any routes or altitudes...
14 CFR 91.119 - Minimum safe altitudes: General.
Code of Federal Regulations, 2012 CFR
2012-01-01
... than 500 feet to any person, vessel, vehicle, or structure. (d) Helicopters, powered parachutes, and... surface— (1) A helicopter may be operated at less than the minimums prescribed in paragraph (b) or (c) of this section, provided each person operating the helicopter complies with any routes or altitudes...
14 CFR 93.307 - Minimum flight altitudes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... feet MSL. (b) Minimum corridor altitudes—(1) Commercial air tours—(i) Zuni Point Corridors. 7,500 feet MSL. (ii) Dragon Corridor. 7,500 feet MSL. (2) Transient and general aviation operations—(i) Zuni Point Corridor. 10,500 feet MSL. (ii) Dragon Corridor. 10,500 feet MSL. (iii) Tuckup Corridor. 10,500...
14 CFR 93.307 - Minimum flight altitudes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... feet MSL. (b) Minimum corridor altitudes—(1) Commercial air tours—(i) Zuni Point Corridors. 7,500 feet MSL. (ii) Dragon Corridor. 7,500 feet MSL. (2) Transient and general aviation operations—(i) Zuni Point Corridor. 10,500 feet MSL. (ii) Dragon Corridor. 10,500 feet MSL. (iii) Tuckup Corridor. 10,500...
Minimum Essential Requirements and Standards in Medical Education.
ERIC Educational Resources Information Center
Wojtczak, Andrzej; Schwarz, M. Roy
2000-01-01
Reviews the definition of standards in general, and proposes a definition of standards and global minimum essential requirements for use in medical education. Aims to serve as a tool for the improvement of quality and international comparisons of basic medical programs. Explains the IIME (Institute for International Medical Education) project…
78 FR 7314 - Shared Responsibility Payment for Not Maintaining Minimum Essential Coverage
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-01
... accounting firm in accordance with generally accepted accounting principles the report of which is made... affordable coverage if the individual's required contribution (determined on an annual basis) for minimum... portion of the required contribution made through a salary reduction arrangement and excluded from gross...
The variance modulation associated with the vestibular evoked myogenic potential.
Lütkenhöner, Bernd; Rudack, Claudia; Basel, Türker
2011-07-01
Model considerations suggest that the sound-induced inhibition underlying the vestibular evoked myogenic potential (VEMP) briefly reduces the variance of the electromyogram (EMG) from which the VEMP is derived. Although more difficult to investigate, this inhibitory modulation of the variance promises to be a specific measure of the inhibition, in that respect being superior to the VEMP itself. This study aimed to verify the theoretical predictions. Archived data from 672 clinical VEMP investigations, comprising about 300,000 EMG records altogether, were pooled. Both the complete data pool and subsets of data representing VEMPs of varying degrees of distinctness were analyzed. The data were generally normalized so that the EMG had variance one. Regarding VEMP deflection p13, the data confirm the theoretical predictions. At the latency of deflection n23, however, an additional excitatory component, showing a maximal effect around 30 ms, appears to contribute. Studying the variance modulation may help to identify and characterize different components of the VEMP. In particular, it appears to be possible to distinguish between inhibition and excitation. The variance modulation provides information not being available in the VEMP itself. Thus, studying this measure may significantly contribute to our understanding of the VEMP phenomenon. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background: mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance is equal to the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results: In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions: These conclusions will guide the development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
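The quadratic mean-variance relationship reported is the defining property of the negative binomial model: with mean $\mu$ and dispersion $\phi$,

$$ \operatorname{Var}(Y) = \mu + \phi\,\mu^{2}, $$

so technical (Poisson) variation corresponds to $\phi = 0$, biological over-dispersion to $\phi > 0$, and the $\mu^{2}$ term dominates for highly expressed genes.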
Rössler, Wulf; Hengartner, Michael P; Ajdacic-Gross, Vladeta; Haker, Helene; Angst, Jules
2013-10-01
Our aim was to deconstruct the variance underlying the expression of sub-clinical psychosis symptoms into portions associated with latent time-dependent states and time-invariant traits. We analyzed data of 335 subjects from the general population of Zurich, Switzerland, who had been repeatedly measured between 1979 (age 20/21) and 2008 (age 49/50). We applied two measures of sub-clinical psychosis derived from the SCL-90-R, namely schizotypal signs (STS) and schizophrenia nuclear symptoms (SNS). Variance was decomposed with latent state-trait analysis and associations with covariates were examined with generalized linear models. At ages 19/20 and 49/50, the latent states underlying STS accounted for 48% and 51% of variance, whereas for SNS those estimates were 62% and 50%. Between those age classes, however, expression of sub-clinical psychosis was strongly associated with stable traits (75% and 89% of total variance in STS and SNS, respectively, at age 27/28). Latent states underlying variance in STS and SNS were particularly related to partnership problems over almost the entire observation period. STS was additionally related to employment problems, whereas drug-use was a strong predictor of states underlying both syndromes at age 19/20. The latent trait underlying expression of STS and SNS was particularly related to low sense of mastery and self-esteem and to high depressiveness. Although most psychosis symptoms are transient and episodic in nature, the variability in their expression is predominantly caused by stable traits. Those time-invariant and rather consistent effects are particularly influential around age 30, whereas the occasion-specific states appear to be particularly influential at ages 20 and 50. © 2013.
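In latent state-trait notation, the decomposition used here splits each observed score at occasion $t$ into a stable trait, an occasion-specific state residual, and measurement error,

$$ Y_{it} = \tau_i + \zeta_{it} + \varepsilon_{it}, \qquad \operatorname{Var}(Y_{it}) = \sigma^{2}_{\tau} + \sigma^{2}_{\zeta_t} + \sigma^{2}_{\varepsilon_t}, $$

and the percentages quoted (e.g., 75% and 89% at age 27/28) are shares of total variance attributable to the trait component, $\sigma^{2}_{\tau}/\operatorname{Var}(Y_{it})$.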
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2017-04-01
The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
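The ω-hierarchical coefficient invoked here is computed from the bifactor solution: with general-factor loadings $\lambda_{gi}$, group-factor loadings $\lambda_{ki}$, and uniquenesses $\theta_{ii}$,

$$ \omega_h = \frac{\big(\sum_i \lambda_{gi}\big)^{2}}{\big(\sum_i \lambda_{gi}\big)^{2} + \sum_k \big(\sum_{i \in k} \lambda_{ki}\big)^{2} + \sum_i \theta_{ii}}, $$

so a large $\omega_h$ for the composite together with small analogous subscale coefficients ($\omega_{hs}$) is exactly the pattern reported: most reliable variance reflects g, and little uniquely reflects the group factors.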
Using variance structure to quantify responses to perturbation in fish catches
Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.
2017-01-01
We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
Social capital and health - purely a question of context?
Giordano, Giuseppe Nicola; Ohlsson, Henrik; Lindström, Martin
2011-07-01
Debate still surrounds which level of analysis (individual vs. contextual) is most appropriate to investigate the effects of social capital on health. Applying multilevel ecometric analyses to British Household Panel Survey data, we estimated fixed and random effects between five individual-, household- and small area-level social capital indicators and general health. We further compared the variance in health attributable to each level using intraclass correlations. Our results demonstrate that association between social capital and health depends on indicator type and level investigated, with one quarter of total individual-level health variance found at the household level. However, individual-level social capital variables and other health determinants appear to influence contextual-level variance the most. Copyright © 2011 Elsevier Ltd. All rights reserved.
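The intraclass correlations used to attribute health variance to levels are variance ratios; for a single between/within split,

$$ \mathrm{ICC} = \frac{\sigma^{2}_{\text{between}}}{\sigma^{2}_{\text{between}} + \sigma^{2}_{\text{within}}}, $$

so the finding that one quarter of individual-level health variance lies at the household level corresponds to an ICC of roughly 0.25 for the household grouping.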
Linkage disequilibrium and association mapping.
Weir, B S
2008-01-01
Linkage disequilibrium refers to the association between alleles at different loci. The standard definition applies to two alleles in the same gamete, and it can be regarded as the covariance of indicator variables for the states of those two alleles. The corresponding correlation coefficient ρ is the parameter that arises naturally in discussions of tests of association between markers and genetic diseases. A general treatment of association tests makes use of the additive and nonadditive components of variance for the disease gene. In almost all expressions that describe the behavior of association tests, additive variance components are modified by the squared correlation coefficient ρ² and the nonadditive variance components by ρ⁴, suggesting that nonadditive components have less influence than additive components on association tests.
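Concretely, for alleles $A$ and $B$ at two loci with gametic frequency $p_{AB}$, the disequilibrium and its correlation are

$$ D_{AB} = p_{AB} - p_A\,p_B, \qquad \rho = \frac{D_{AB}}{\sqrt{p_A(1-p_A)\,p_B(1-p_B)}}, $$

i.e., exactly the covariance and correlation of the two allelic indicator variables; the additive variance components in association-test expressions then enter scaled by ρ² and the nonadditive ones by ρ⁴.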
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate data to near-normal; (5) a test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) the generation of random variates from specified distributions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false General. 22.1002-1 Section 22.1002-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SOCIOECONOMIC....1002-1 General. Service contracts over $2,500 shall contain mandatory provisions regarding minimum...