25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results for identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC=0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Code of Federal Regulations, 2014 CFR
2014-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Code of Federal Regulations, 2013 CFR
2013-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved).
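As a rough illustration of the overlap measure discussed above (not the exact or minimum variance unbiased estimators derived in the paper), a naive plug-in estimate of π under normality with equal variances is Φ(d), where d is the standardized mean difference; a minimal Python sketch:

```python
import numpy as np
from scipy.stats import norm

def overlap_proportion(treatment, control):
    """Plug-in estimate of pi, the proportion of treatment observations
    expected to exceed the control mean, via the standardized mean
    difference d = (mean_T - mean_C) / s_pooled (normality assumed)."""
    treatment, control = np.asarray(treatment), np.asarray(control)
    n_t, n_c = len(treatment), len(control)
    s_pooled = np.sqrt(((n_t - 1) * treatment.var(ddof=1) +
                        (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
    d = (treatment.mean() - control.mean()) / s_pooled
    return norm.cdf(d)   # pi = Phi(d) under the normal model

# Example: simulated experiment with a true standardized effect of 0.5
rng = np.random.default_rng(0)
t = rng.normal(0.5, 1.0, 50)
c = rng.normal(0.0, 1.0, 50)
print(overlap_proportion(t, c))   # close to norm.cdf(0.5) ~ 0.69
```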
2014-03-27
[Thesis front-matter residue: list-of-figures entries and an abbreviation list, including MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying) and QAM (quadrature amplitude modulation).]
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from Global Positioning System satellites and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, the Ryherd-Woodcock minimum variance adaptive window, and low-pass filters, were applied to moving...
Design of a compensator for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Goldstein, M. L.
2006-01-01
We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (approximately equal to 3:1 up to approximately equal to 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
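For context, the Riccati-based computation that the paper seeks to avoid at large scale can be written in a few lines for small systems; the following sketch (generic, not the authors' cropped-phase-screen approximation; the toy A, C, Q, R matrices are hypothetical) computes a steady-state Kalman gain with SciPy:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_kalman_gain(A, C, Q, R):
    """Steady-state Kalman gain for x[k+1] = A x[k] + w, y[k] = C x[k] + v,
    with process/measurement noise covariances Q and R.  The prediction-error
    covariance P solves a discrete algebraic Riccati equation; this is the
    step whose cost grows quickly with aperture size in the AO context."""
    P = solve_discrete_are(A.T, C.T, Q, R)          # Riccati solve
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # minimum-variance gain
    return K, P

# Toy example (hypothetical 2-mode "turbulence" state, scalar measurement)
A = np.array([[0.99, 0.0], [0.0, 0.95]])
C = np.array([[1.0, 1.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
K, P = steady_state_kalman_gain(A, C, Q, R)
print(K)
```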
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
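A minimal sketch of the equality-constrained mean-variance problem described above, assuming short sales are allowed and using simulated returns in place of the FBMKLCI data (the closed-form KKT solution, not the authors' exact procedure):

```python
import numpy as np

def mean_variance_weights(returns, target_return):
    """Markowitz mean-variance portfolio: minimize w' Sigma w subject to
    w' mu = target_return and sum(w) = 1 (short selling allowed).
    `returns` is a (T x n) array of asset returns."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    n = len(mu)
    # KKT system for the equality-constrained quadratic program
    A = np.vstack([mu, np.ones(n)])                  # constraint matrix (2 x n)
    kkt = np.block([[2 * sigma, A.T],
                    [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    sol = np.linalg.solve(kkt, rhs)
    w = sol[:n]
    return w, w @ sigma @ w                          # weights and portfolio variance

# Example with simulated weekly returns for 5 hypothetical stocks
rng = np.random.default_rng(1)
rets = rng.normal(0.002, 0.03, size=(200, 5))
w, var = mean_variance_weights(rets, target_return=0.002)
print(w.round(3), var)
```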
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati (and/or Ermakov)-type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, that is, when squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) . Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
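A hedged numpy analogue of the partitioning idea (the authors provide SAS code; the variable values and function names below are illustrative only): the smallest-eigenvalue component of the presence-point covariance is the minimum-variance combination, and its contribution to Mahalanobis D² can be mapped separately.

```python
import numpy as np

def minimum_variance_component(presence_env):
    """Eigen-decompose the covariance of environmental variables at presence
    points; the eigenvector with the smallest eigenvalue is the linear
    combination with minimum variance, i.e. the most 'limiting' combination
    in the partitioned-D2 sense."""
    X = np.asarray(presence_env, dtype=float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    return mu, eigvals, eigvecs

def partial_d2(points, mu, eigvals, eigvecs, k=1):
    """Contribution of the k smallest-variance components to Mahalanobis D2
    for arbitrary map points (lower values = closer to the species' minimum
    habitat requirements)."""
    Z = (np.asarray(points, dtype=float) - mu) @ eigvecs[:, :k]   # component scores
    return np.sum(Z**2 / eigvals[:k], axis=1)

# Hypothetical example: 3 environmental variables at 100 presence localities
rng = np.random.default_rng(2)
presence = rng.normal([10.0, 50.0, 0.3], [0.5, 8.0, 0.05], size=(100, 3))
mu, vals, vecs = minimum_variance_component(presence)
print(partial_d2(presence[:5], mu, vals, vecs, k=1))
```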
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness.
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method is a robust, high-resolution spectrum estimation technique. Based on SAR imaging theory, the signal model of SAR imagery is analyzed and shown to be amenable to data extrapolation methods for improving the resolution of SAR images. The method is used to extrapolate the efficient bandwidth in the phase history domain, and better results are obtained, on both simulated data and actual measured data, than with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method.
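For reference, the minimum variance (Capon) spectrum estimation step alone can be sketched as follows (a generic 1-D illustration with diagonal loading, not the authors' SAR extrapolation algorithm):

```python
import numpy as np

def mv_spectrum(x, order=32, n_freqs=256):
    """Minimum variance (Capon) spectral estimate for a 1-D complex signal.
    Overlapping length-`order` snapshots give a sample covariance R, and the
    spectrum is P(f) = 1 / (a(f)^H R^{-1} a(f)) with a(f) the Fourier
    steering vector.  Diagonal loading keeps the inverse well conditioned."""
    x = np.asarray(x)
    snaps = np.array([x[i:i + order] for i in range(len(x) - order + 1)])
    R = snaps.T @ snaps.conj() / snaps.shape[0]            # sample covariance
    R += 1e-3 * np.trace(R).real / order * np.eye(order)   # diagonal loading
    R_inv = np.linalg.inv(R)
    freqs = np.linspace(-0.5, 0.5, n_freqs, endpoint=False)
    P = np.empty(n_freqs)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(order))      # steering vector
        P[k] = 1.0 / np.real(a.conj() @ R_inv @ a)
    return freqs, P

# Example: two closely spaced complex sinusoids in noise
rng = np.random.default_rng(6)
n = np.arange(512)
x = (np.exp(2j * np.pi * 0.10 * n) + 0.5 * np.exp(2j * np.pi * 0.13 * n)
     + 0.1 * (rng.standard_normal(512) + 1j * rng.standard_normal(512)))
freqs, P = mv_spectrum(x)
print(freqs[np.argmax(P)])   # should be near 0.10
```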
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Brühl, Albert; Planer, Katarina; Hagel, Anja
2018-01-01
A validity test was conducted to determine how care level-based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level-based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls.
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.
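The minimum variance eigenvalue analysis mentioned above is, in its basic form, an eigen-decomposition of the 3x3 covariance matrix of the field components; a minimal sketch on synthetic data (not the authors' dataset):

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classic MVA for a magnetic field time series B (N x 3 components).
    Returns eigenvalues (ascending) and eigenvectors of the magnetic variance
    matrix; the eigenvector of the smallest eigenvalue is the minimum-variance
    direction, conventionally taken as the wave normal for planar fluctuations."""
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False)                  # 3x3 variance matrix of the components
    eigvals, eigvecs = np.linalg.eigh(M)         # ascending order
    return eigvals, eigvecs

# Example: angle between the minimum-variance direction and the mean field
rng = np.random.default_rng(3)
B = rng.normal([4.0, 1.0, 0.5], [0.2, 1.0, 1.0], size=(1000, 3))   # synthetic field
vals, vecs = minimum_variance_analysis(B)
b_mean = B.mean(axis=0) / np.linalg.norm(B.mean(axis=0))
angle = np.degrees(np.arccos(abs(vecs[:, 0] @ b_mean)))
print(vals, angle)
```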
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
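A minimal sketch of the general idea, estimating a power-of-x variance model from OLS residuals and refitting by weighted least squares (an illustrative two-stage scheme on simulated data, not Parresol's exact procedure):

```python
import numpy as np

def wls_power_variance(x, y, n_iter=3):
    """Two-stage weighted least squares for a linear model with multiplicative
    heteroscedasticity, Var(e_i) ~ sigma^2 * x_i^(2k).  The variance power k is
    estimated by regressing log squared OLS residuals on log(x), then used to
    form weights w_i = x_i^(-2k)."""
    X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS start
    for _ in range(n_iter):
        resid = y - X @ beta
        lx = np.log(x)
        k = np.polyfit(lx, np.log(resid**2 + 1e-12), 1)[0] / 2.0
        w = x ** (-2.0 * k)                      # inverse-variance weights
        beta = np.linalg.lstsq(X * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)[0]
    return beta, k

# Example: simulated tree data with error SD growing with size
rng = np.random.default_rng(4)
x = rng.uniform(5, 50, 200)
y = 2.0 + 0.8 * x + rng.normal(0, 0.05 * x**1.5)
print(wls_power_variance(x, y))
```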
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E
2010-07-01
Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by the type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both one-half duplex and full duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. Mean read distances across the 30 combinations ranged from 45.1 to 129.4 cm. Transceiver manufacturer and transponder type interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the Wind spacecraft, at the Lagrangian point L1, from 22 August, 1995, to 7 November, 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a nonzero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with the two methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
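For orientation, the generalized-log family has the form glog(y; λ) = ln(y + sqrt(y² + λ)); the sketch below applies it and picks λ by a crude replicate-variance criterion, a rough stand-in for the model-based estimation the authors describe (their R and Matlab code is available on request):

```python
import numpy as np

def glog(y, lam):
    """Generalized-log transform, ln(y + sqrt(y^2 + lam)); for lam = 0 and
    y > 0 it reduces to ln(2y), while for large lam it behaves nearly
    linearly near zero, stabilizing the variance of low-intensity spots."""
    y = np.asarray(y, dtype=float)
    return np.log(y + np.sqrt(y**2 + lam))

def choose_lambda(y, groups, grid):
    """Crude grid search for the transformation parameter: pick the lambda
    that minimizes the spread of within-group (replicate) variances, a rough
    stand-in for the model-based estimation described in the abstract."""
    best, best_score = None, np.inf
    for lam in grid:
        z = glog(y, lam)
        group_vars = [np.var(z[groups == g], ddof=1) for g in np.unique(groups)]
        score = np.var(group_vars)
        if score < best_score:
            best, best_score = lam, score
    return best

# Example with simulated two-component (additive + multiplicative) noise
rng = np.random.default_rng(5)
true = np.repeat(np.exp(rng.normal(6, 1, 50)), 4)       # 50 genes x 4 replicates
groups = np.repeat(np.arange(50), 4)
y = true * np.exp(rng.normal(0, 0.15, 200)) + rng.normal(0, 40, 200)
print(choose_lambda(y, groups, grid=np.logspace(1, 6, 20)))
```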
Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland
Pluhowski, E.J.
1981-01-01
Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3C (Celsius) for daily maximum water temperatures and 2.1C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9C at Muddy Creek to 13.1C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide ranging stream-temperature perturbations of as much as 6C have been documented in the basin. (USGS)
Sleep and nutritional deprivation and performance of house officers.
Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R
1985-07-01
A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
Multifractal Properties of Process Control Variables
NASA Astrophysics Data System (ADS)
Domański, Paweł D.
2017-06-01
A control system is an inevitable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. There are various methods, such as time-domain measures, minimum variance indexes, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that the signals originating from industrial installations have multifractal properties, and such an analysis may extend the standard approach with further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process. It helps to discover internal dependencies and human factors, which are hardly detectable.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
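The repeated-reconstruction decomposition described above amounts to splitting the per-pixel MSE into squared bias plus variance; a minimal sketch (array names are hypothetical):

```python
import numpy as np

def image_error_decomposition(reconstructions, truth):
    """Decompose the per-pixel mean-squared error of repeated reconstructions
    (each from an independently noise-corrupted data set) into squared bias
    and variance.  `reconstructions` has shape (n_repeats, n_pixels)."""
    recon = np.asarray(reconstructions, dtype=float)
    mean_img = recon.mean(axis=0)
    bias2 = (mean_img - truth) ** 2                 # squared bias per pixel
    var = recon.var(axis=0, ddof=1)                 # variance per pixel
    mse = bias2 + var
    return bias2.mean(), var.mean(), mse.mean()     # image-averaged values

# Hypothetical use: 100 reconstructions of a test image at one regularization level
# bias2, var, mse = image_error_decomposition(recon_stack, true_image)
```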
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches is given in tables.
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.
Mohades Deylami, Ali; Mohammadzadeh Asl, Babak
2017-06-01
The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions compared with the standard BS. This is because of the symmetric beampatterns of the beams in the proposed BS, compared with the asymmetric ones in the standard BS. This lets us decrease the data dimension to two, so a highly complex algorithm such as the MVB can be applied faster in this BS. The results indicate that by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB), with only 0.44% of the required flops. Also, this beamformer is more robust against sound speed estimation errors than the SMVB.
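A compact sketch of the beamspace idea under simplifying assumptions (delay-aligned channel data, an all-ones steering vector, diagonal loading, and random data in place of ultrasound signals); it projects onto the first two DCT beams and applies standard MV weights there, and is not the authors' full implementation:

```python
import numpy as np
from scipy.fft import dct

def beamspace_mv_output(X, n_beams=2, loading=1e-2):
    """Beamspace minimum-variance beamforming sketch for one imaging point.
    X: delay-aligned channel data (n_elements x n_snapshots).  The data are
    projected onto the first `n_beams` DCT basis vectors, and MV weights
    w = R^-1 a / (a^H R^-1 a) are computed in that reduced space."""
    M, K = X.shape
    C = dct(np.eye(M), axis=0, norm='ortho')     # orthonormal DCT-II matrix (rows = basis vectors)
    T = C[:n_beams].T                            # first DCT beams as columns (M x n_beams)
    a = np.ones(M)                               # steering vector after delay alignment
    Xb, ab = T.T @ X, T.T @ a                    # beamspace data and steering vector
    Rb = Xb @ Xb.conj().T / K
    Rb += loading * np.trace(Rb).real / n_beams * np.eye(n_beams)   # diagonal loading
    Rinv_a = np.linalg.solve(Rb, ab)
    w = Rinv_a / (ab.conj() @ Rinv_a)
    return w.conj() @ Xb.mean(axis=1)            # beamformed sample

# Hypothetical example with 64 channels and 16 temporal snapshots
rng = np.random.default_rng(7)
X = rng.normal(size=(64, 16))
print(beamspace_mv_output(X))
```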
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines 2 prior estimates of a population parameter with a weighted average, where the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is advantageous when working with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E
2013-04-01
Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Applications of GARCH models to energy commodities
NASA Astrophysics Data System (ADS)
Humphreys, H. Brett
This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
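Where the thesis fits seasonal and multivariate GARCH specifications, the basic building block is a univariate GARCH(1,1) fitted by quasi-maximum likelihood; a self-contained sketch on simulated returns (not the thesis data or models):

```python
import numpy as np
from scipy.optimize import minimize

def garch11_fit(returns):
    """Gaussian quasi-maximum-likelihood fit of a plain GARCH(1,1),
    sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}, on demeaned
    returns.  A bare-bones stand-in for the richer GARCH specifications
    described in the thesis abstract above."""
    e = np.asarray(returns, dtype=float)
    e = e - e.mean()

    def neg_loglik(params):
        omega, alpha, beta = params
        sigma2 = np.empty_like(e)
        sigma2[0] = e.var()
        for t in range(1, len(e)):
            sigma2[t] = omega + alpha * e[t - 1]**2 + beta * sigma2[t - 1]
        return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + e**2 / sigma2)

    res = minimize(neg_loglik, x0=[0.1 * e.var(), 0.05, 0.9],
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)],
                   method='L-BFGS-B')
    return res.x   # omega, alpha, beta

# Example on simulated heavy-tailed returns
rng = np.random.default_rng(8)
r = rng.standard_t(df=6, size=1000) * 0.02
print(garch11_fit(r))
```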
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
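The abstract's key quantities (trajectory smoothness via normalized jerk, endpoint error, and endpoint variance) can be computed along the lines sketched below. The dimensionless-jerk normalization (movement duration to the fifth power over squared path length) is one common convention and is assumed here; the study's exact definition is not given in the abstract.

```python
import numpy as np

def normalized_jerk(x, y, dt):
    """Dimensionless jerk of a 2-D trajectory sampled at interval dt.
    Uses one common normalization (integral of squared jerk scaled by
    duration**5 and path length**2); other conventions exist."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    jx, jy = np.gradient(ax, dt), np.gradient(ay, dt)
    duration = dt * (len(x) - 1)
    path_len = np.sum(np.hypot(np.diff(x), np.diff(y)))
    jerk_sq_integral = np.sum(jx**2 + jy**2) * dt
    return np.sqrt(0.5 * jerk_sq_integral * duration**5 / path_len**2)

def endpoint_stats(end_x, end_y, target_x, target_y):
    """Endpoint error (mean distance to the target) and endpoint variance
    (sum of the variances of the endpoint coordinates) over a block of trials."""
    err = np.hypot(end_x - target_x, end_y - target_y).mean()
    var = end_x.var(ddof=1) + end_y.var(ddof=1)
    return err, var
```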
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode Ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, the Eigenspace-based minimum variance (EIBMV) beamformer has been employed for second harmonic USI. Tissue Harmonic Imaging (THI) is achieved by the Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of variations of the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and the proper parameters of EIBMV are indicated for THI.
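A minimal sketch of the eigenspace-based minimum variance idea, assuming pre-delayed (time-aligned) channel data: compute diagonally loaded Capon/MV weights, then project them onto the subspace spanned by the L dominant eigenvectors of the covariance matrix. The loading scheme and the choice of L here are illustrative, not the paper's tuned values of K, L, and δ.

```python
import numpy as np

def eibmv_weights(R, a, L, delta=1e-2):
    """Eigenspace-based minimum variance weights (illustrative sketch).

    R     : sample covariance matrix of the receive channels
    a     : steering vector (all ones for pre-delayed channel data)
    L     : number of dominant eigenvectors spanning the signal subspace
    delta : diagonal loading factor relative to the average channel power
    """
    n = R.shape[0]
    Rl = R + delta * np.trace(R) / n * np.eye(n)   # diagonal loading
    w_mv = np.linalg.solve(Rl, a)
    w_mv /= a.conj() @ w_mv                        # Capon/MV weights
    vals, vecs = np.linalg.eigh(Rl)                # eigenvalues in ascending order
    Es = vecs[:, -L:]                              # signal subspace (largest L)
    return Es @ (Es.conj().T @ w_mv)               # project MV weights onto it
```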
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
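The minimum-variance idea being tested can be illustrated with a small constrained optimization: choose the at-a-station exponents of width (b), depth (f), and velocity (m) so that the sum of squared exponents of the adjustable variables is minimized subject to continuity b + f + m = 1. The particular component set used below (width, depth, velocity, shear, and friction factor, with slope held constant) is an assumed illustrative subset, not the full analysis of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_exponents():
    """Illustrative Langbein-style calculation of hydraulic exponents.
    With slope constant, shear stress varies as depth (exponent f) and the
    Darcy-Weisbach friction factor roughly as depth/velocity**2
    (exponent f - 2m); these component choices are assumptions of the sketch."""
    def variance(x):
        b, f, m = x
        shear = f
        friction = f - 2.0 * m
        return b**2 + f**2 + m**2 + shear**2 + friction**2

    cons = {"type": "eq", "fun": lambda x: x.sum() - 1.0}   # continuity b+f+m=1
    res = minimize(variance, x0=np.array([0.3, 0.4, 0.3]), constraints=cons)
    return dict(zip(["b", "f", "m"], res.x))

print(min_variance_exponents())   # roughly b~0.48, f~0.30, m~0.22 under these assumptions
```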
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
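For orientation, the sketch below contrasts plain DAS with DMAS on already time-aligned (delayed) channel data; MVB-DMAS, as described in the abstract, replaces the plain summations inside the expanded DMAS algebra with MV-weighted ones. This is a schematic illustration, not the authors' implementation.

```python
import numpy as np

def das(channels):
    """Delay-and-sum: plain average of already time-aligned channel samples."""
    return channels.mean(axis=0)

def dmas(channels):
    """Delay-multiply-and-sum on time-aligned channels: sum over all channel
    pairs of the signed square root of their product, which suppresses
    uncorrelated (off-axis) content relative to DAS."""
    n, length = channels.shape
    out = np.zeros(length)
    for i in range(n):
        for j in range(i + 1, n):
            prod = channels[i] * channels[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out
```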
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have distributions very similar to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
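The core variance step described here can be caricatured as follows: detrend a radiance scan to remove the large-scale background, form the sample variance, and subtract the instrument noise variance. The quadratic detrending and the assumption that the noise level is known in advance are simplifications of the paper's more careful noise removal.

```python
import numpy as np

def gravity_wave_variance(radiance_scan, noise_sigma):
    """Crude sketch of the variance step: detrend a cross-track radiance scan,
    form the sample variance, and subtract the (assumed known) instrument
    noise variance so only atmospheric fluctuations remain.  Values at or
    below zero are reported as zero (below the detection limit)."""
    idx = np.arange(radiance_scan.size)
    trend = np.polyval(np.polyfit(idx, radiance_scan, 2), idx)  # large-scale background
    detrended = radiance_scan - trend
    total_var = detrended.var(ddof=1)
    return max(total_var - noise_sigma**2, 0.0)
```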
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Some refinements on the comparison of areal sampling methods via simulation
Jeffrey Gove
2017-01-01
The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and even on a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered: one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet, and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92-s averages did not give minimum variance directions consistent with a horizontal current sheet.
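The variance analysis referred to here is the standard minimum-variance technique: the current-sheet normal is estimated as the eigenvector of the magnetic-field covariance matrix associated with the smallest eigenvalue. A minimal sketch:

```python
import numpy as np

def minimum_variance_normal(B):
    """Classic minimum-variance analysis of a current-sheet crossing.

    B : (N, 3) array of magnetic field vectors (e.g., hourly averages).
    Returns the unit eigenvector of the field covariance matrix with the
    smallest eigenvalue, taken as the estimate of the sheet normal, plus the
    eigenvalues (their separation indicates how well determined the normal is).
    """
    cov = np.cov(B, rowvar=False)        # 3x3 covariance of the components
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = vecs[:, 0]                  # minimum-variance direction
    return normal, vals
```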
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
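For concreteness, the crude (two-part) MDL score in its usual textbook form is the negative log-likelihood plus a (k/2) log n model-cost term; the sketch below applies it to a toy polynomial-degree selection. The Gaussian likelihood and the candidate degrees are illustrative and unrelated to the paper's Bayesian-network experiments.

```python
import numpy as np

def crude_mdl(log_likelihood, n_params, n_samples):
    """Crude two-part MDL score: data code length (negative log-likelihood)
    plus a (k/2) log n penalty for the model.  Lower is better; the penalty
    grows with complexity, trading bias against variance."""
    return -log_likelihood + 0.5 * n_params * np.log(n_samples)

# toy usage: pick the polynomial degree with the smallest crude MDL
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
for degree in (1, 3, 9):
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    sigma2 = resid.var()                                        # MLE noise variance
    loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1.0)  # Gaussian log-likelihood
    print(degree, crude_mdl(loglik, degree + 1, x.size))
```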
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
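The IPTW estimator whose instability the paper illustrates can be sketched as follows, with the propensity score fit by logistic regression. This is the naive estimator under discussion, not the C-TMLE procedure; the covariate matrix X and the Hajek normalization are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_mean_outcome(X, treatment, outcome):
    """Plain IPTW estimate of the mean outcome under treatment, with the
    propensity score fit by logistic regression.  It is exactly this
    weighting that becomes unstable when pure causes of the exposure are
    included in X."""
    ps = LogisticRegression(max_iter=1000).fit(X, treatment).predict_proba(X)[:, 1]
    weights = treatment / ps                              # treated units weighted by 1/propensity
    return np.sum(weights * outcome) / np.sum(weights)    # Hajek-normalized IPTW
```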
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra; Darzi, Soodabeh
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem, such that the optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal-to-interference-plus-noise ratio (SINR) for wanted signals. PMID:25003136
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, J. J., III; Prinzel, Lance J., III; Norman, Robert M.
2011-01-01
This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variances in performance and pilot behavior were reviewed for acceptability when using the HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202 Minimum standards for health benefits carriers. The minimum standards for health benefits carriers for the...
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202 Minimum standards for health benefits carriers. The minimum standards for health benefits carriers for the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147 meeting: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147 meeting: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... confidential information in reaching a decision on a variance application. Interested members of the public...
Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D
2017-01-01
The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: (1) rate of adequate bowel preparation (minimum standard 90%); (2) cecal intubation rate (minimum standard 90%); (3) adenoma detection rate (minimum standard 25%); (4) appropriate polypectomy technique (minimum standard 80%); (5) complication rate (minimum standard not set); (6) patient experience (minimum standard not set); (7) appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Information Collection: Comment Request; Minimum Property Standards for Multifamily and Care-Type Facilities...: Minimum Property Standards for Multifamily and Care-type facilities. OMB Control Number, if applicable... Housing and Urban Development (HUD) developed the Minimum Property Standards (MPS) program in order to...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 8 2013-01-01 2013-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 8 2014-01-01 2014-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 8 2012-01-01 2012-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
A test of source-surface model predictions of heliospheric current sheet inclination
NASA Technical Reports Server (NTRS)
Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.
1994-01-01
The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... reaching a decision on a variance application. Interested members of the public will be allowed a...
Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado
Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.
2003-01-01
The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r² value) of surface maximum and minimum temperature by the 850–700 mb layer mean temperature alone was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values together. The subtraction shows that by including NDVI values in the analysis, the r² values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modelling, and climate change assessments.
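The subtraction described in the abstract amounts to comparing the R² of a regression of daily maximum (or minimum) temperature on the 850-700 mb layer-mean temperature alone with the R² after NDVI is added, roughly as sketched below.

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an ordinary least-squares fit of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def ndvi_added_variance(tmax, layer_temp, ndvi):
    """Increase in explained variance of daily maximum temperature when NDVI
    is added to the 850-700 mb layer-mean temperature as a predictor,
    mirroring the subtraction described in the abstract."""
    with_ndvi = r_squared(tmax, np.column_stack([layer_temp, ndvi]))
    without_ndvi = r_squared(tmax, layer_temp.reshape(-1, 1))
    return with_ndvi - without_ndvi
```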
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
78 FR 21060 - Appeal Proceedings Before the Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-09
... adoption of alternate standards from those required by the Commission's minimum internal control standards... adoption of alternate standards from those required by the Commission's minimum internal control standards... TGRAs' adoption of alternate standards from those required by the Commission's minimum internal control...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on individuals observed repeatedly over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
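For reference, the two estimators compared here have simple forms when computed from consecutive fractional-frequency averages at the basic sampling interval; longer averaging times are handled by first averaging adjacent samples. A minimal sketch:

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency averages y_k:
    0.5 * <(y_{k+1} - y_k)^2>.  Diverges for very red noise (alpha <= -3)
    and is biased by linear frequency drift."""
    d = np.diff(y)
    return 0.5 * np.mean(d**2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance: (1/6) * <(y_{k+2} - 2 y_{k+1} + y_k)^2>.
    The second difference cancels linear frequency drift and converges for
    alpha > -5, which is why it suits aging rubidium standards."""
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
    return np.mean(d2**2) / 6.0
```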
25 CFR 542.14 - What are the minimum internal control standards for the cage?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for the cage? 542.14 Section 542.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.14 What are the minimum internal control standards for the cage? (a) Computer applications. For...
25 CFR 543.8 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for bingo? 543.8 Section 543.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.8 What are the minimum internal control standards for bingo? (a) Supervision....
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 543.8 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for bingo? 543.8 Section 543.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.8 What are the minimum internal control standards for bingo? (a) Supervision....
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... 228--Minimum Operational Performance Standards for Unmanned Aircraft Systems AGENCY: Federal Aviation...--Minimum Operational Performance Standards for Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 228--Minimum Operational Performance...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-25
... 228--Minimum Operational Performance Standards for Unmanned Aircraft Systems AGENCY: Federal Aviation...--Minimum Operational Performance Standards for Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 228--Minimum Operational Performance...
12 CFR 564.4 - Minimum appraisal standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...
25 CFR 543.17 - What are the minimum internal control standards for drop and count?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for drop and count? 543.17 Section 543.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.17 What are the minimum internal control standards for drop and count?...
25 CFR 543.15 - What are the minimum internal control standards for lines of credit?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for lines of credit? 543.15 Section 543.15 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.15 What are the minimum internal control standards for lines of credi...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.9 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 543.9 Section 543.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.9 What are the minimum internal control standards for pull tabs? (a)...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
25 CFR 543.13 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for complimentary services or items? 543.13 Section 543.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.13 What are the minimum internal control standards fo...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.9 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 543.9 Section 543.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.9 What are the minimum internal control standards for pull tabs? (a)...
25 CFR 543.15 - What are the minimum internal control standards for lines of credit?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for lines of credit? 543.15 Section 543.15 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.15 What are the minimum internal control standards for lines of credi...
25 CFR 543.17 - What are the minimum internal control standards for drop and count?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for drop and count? 543.17 Section 543.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.17 What are the minimum internal control standards for drop and count?...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.13 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for complimentary services or items? 543.13 Section 543.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.13 What are the minimum internal control standards fo...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
77 FR 58707 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-21
... Gaming Commission 25 CFR Part 543 Minimum Internal Control Standards; Final Rule Federal Register... Control Standards AGENCY: National Indian Gaming Commission, Interior. ACTION: Final rule. SUMMARY: The National Indian Gaming Commission (NIGC) amends its minimum internal control standards for Class II gaming...
77 FR 43196 - Minimum Internal Control Standards and Technical Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... NATIONAL INDIAN GAMING COMMISSION 25 CFR Parts 543 and 547 Minimum Internal Control Standards... SUPPLEMENTARY INFORMATION: Part 543 addresses minimum internal control standards (MICS) for Class II gaming operations. The regulations require tribes to establish controls and implement...
78 FR 11793 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-20
... Internal Control Standards AGENCY: National Indian Gaming Commission. ACTION: Proposed rule. SUMMARY: The National Indian Gaming Commission (NIGC) proposes to amend its minimum internal control standards for Class... NIGC published a final rule in the Federal Register called Minimum Internal Control Standards. 64 FR...
Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-02-01
Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
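To make the iterative reconstruction idea concrete, the sketch below implements a generic preconditioned conjugate gradient solver for a symmetric positive-definite system. The simple Jacobi (diagonal) preconditioner and toy matrix are stand-ins, not the layer-oriented multigrid smoother or the wavefront reconstruction operator described in the paper.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite system.

    A     : callable returning the matrix-vector product A @ x
    M_inv : callable applying the preconditioner to a residual vector
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage: a small SPD system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(0)
L = np.eye(50) + 0.05 * np.tril(rng.normal(size=(50, 50)), -1)
A_mat = L @ L.T
b = rng.normal(size=50)
d_inv = 1.0 / np.diag(A_mat)
x = pcg(lambda v: A_mat @ v, b, lambda r: d_inv * r)
```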
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
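The jackknife variance estimation mentioned above can be written in a few lines. The sketch below applies a delete-one jackknife to an arbitrary statistic (here the sample variance) and forms a t-type statistic from it; the data are illustrative, not the mice or silkworm examples from the paper.

```python
import numpy as np

def jackknife_se(data, statistic):
    """Delete-one jackknife standard error of an arbitrary statistic."""
    n = len(data)
    reps = np.array([statistic(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=40)
var_hat = np.var(x, ddof=1)
se = jackknife_se(x, lambda d: np.var(d, ddof=1))
t_stat = var_hat / se        # jackknife-based t-type statistic for testing the variance component
```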
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE theta which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE theta for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
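As an illustration of the ANOVA route to the repeatability coefficient, the sketch below estimates r from a simple genotypes-by-measurements layout and then applies the standard formula for the number of measurements needed to reach a target coefficient of determination. The paper's CPCOR, CPCOV, and AECOR variants and the Brazil nut data themselves are not reproduced; this is a simplified one-way layout.

```python
import numpy as np

def repeatability_anova(y):
    """Repeatability r from a genotypes x measurements matrix (one-way random-effects ANOVA)."""
    g, m = y.shape
    grand = y.mean()
    ms_geno = m * np.sum((y.mean(axis=1) - grand) ** 2) / (g - 1)
    ms_error = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (g * (m - 1))
    var_g = max(ms_geno - ms_error, 0.0) / m
    return var_g / (var_g + ms_error)

def measurements_needed(r, determination=0.85):
    """Standard formula for the number of measurements giving a target coefficient of determination."""
    return determination * (1 - r) / ((1 - determination) * r)
```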
77 FR 32444 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... Internal Control Standards AGENCY: National Indian Gaming Commission. ACTION: Proposed rule. SUMMARY: The National Indian Gaming Commission (NIGC) proposes to amend its minimum internal control standards for Class... the Federal Register called Minimum Internal Control Standards. 64 FR 590. The rule added a new part...
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
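A minimal sketch of how a generalizability coefficient falls out of variance components, assuming a fully crossed persons x stations design. The actual study's facets (examiners, standardized patients, cases, sites, versions) and its crossover design are not modeled here; this only shows the mechanics.

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a persons x stations (p x s) crossed design."""
    p, s = scores.shape
    grand = scores.mean()
    ms_person = s * np.sum((scores.mean(axis=1) - grand) ** 2) / (p - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_res = np.sum(resid ** 2) / ((p - 1) * (s - 1))
    var_person = max(ms_person - ms_res, 0.0) / s      # universe-score variance
    return var_person / (var_person + ms_res / s)      # relative error uses the residual component
```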
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
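The paper's central point, that variance estimates from small samples carry large uncertainty, can be illustrated with the usual normal-theory standard error of a sample variance. The paper's error-bar construction is more general, so treat the sketch below as an approximation with simulated data.

```python
import numpy as np

def variance_with_error_bar(x):
    """Sample variance with the normal-theory standard error SE(s^2) = s^2 * sqrt(2/(n-1))."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)
    return s2, s2 * np.sqrt(2.0 / (n - 1))

rng = np.random.default_rng(2)
for n in (10, 20, 50, 200):                      # small n -> very wide error bars
    s2, se = variance_with_error_bar(rng.normal(scale=2.0, size=n))
    print(f"n={n:4d}  variance={s2:5.2f}  +/- {se:4.2f}")
```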
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 μm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as the theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain optimal performance when the system is time varying, as occurs when the actively controlled optical trap is operated in a complex environment.
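A minimal sketch of the idea behind minimum-variance feedback on a first-order discrete model. The parameters are illustrative, not fitted trap values, and the paper's controller additionally accounts for actuator dynamics and measurement delay, which are ignored here.

```python
import numpy as np

# Model: x[k+1] = a*x[k] + b*u[k] + w[k], w ~ N(0, q). With ideal, delay-free
# feedback u[k] = -(a/b)*x[k], the closed-loop variance collapses to q.
a, b, q = 0.95, 0.4, 1e-4                 # illustrative parameters only
rng = np.random.default_rng(3)
n = 20000
x_open = np.zeros(n)
x_closed = np.zeros(n)
for k in range(n - 1):
    w = rng.normal(scale=np.sqrt(q))
    x_open[k + 1] = a * x_open[k] + w                 # no feedback
    u = -(a / b) * x_closed[k]                        # minimum-variance control law
    x_closed[k + 1] = a * x_closed[k] + b * u + w
print(x_open.var(), x_closed.var())                   # feedback strongly suppresses the variance
```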
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is more stable geometry than a straight or nonmeandering alinement.
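Because the abstract states that the direction angle is a sine function of channel distance, the resulting sine-generated curve is easy to trace numerically. The sketch below does exactly that; the wavelength and maximum angle are arbitrary illustrative values.

```python
import numpy as np

def sine_generated_meander(omega_deg=110.0, wavelength=100.0, n=2000, cycles=3):
    """Trace a sine-generated curve: the direction angle varies sinusoidally
    with distance along the channel, the form derived for meanders."""
    s = np.linspace(0.0, cycles * wavelength, n)
    theta = np.radians(omega_deg) * np.sin(2 * np.pi * s / wavelength)
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

x, y = sine_generated_meander()      # (x, y) coordinates of the channel centerline
```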
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with the results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
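For reference, the MVDR weights mentioned above have a closed form, w = R^{-1} d / (d^H R^{-1} d). The sketch below computes them with diagonal loading, a common regularization; it is generic and not tied to the specific loudspeaker array model of the paper.

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """Minimum variance distortionless response weights w = R^-1 d / (d^H R^-1 d).

    R : spatial correlation matrix between array elements
    d : steering vector toward the focus (look) point
    Diagonal loading regularizes the matrix inverse.
    """
    n = R.shape[0]
    R_loaded = R + loading * np.trace(R) / n * np.eye(n)
    Rinv_d = np.linalg.solve(R_loaded, d)
    return Rinv_d / (d.conj() @ Rinv_d)
```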
12 CFR 3.11 - Standards for determination of appropriate individual minimum capital ratios.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Standards for determination of appropriate individual minimum capital ratios. 3.11 Section 3.11 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Establishment of Minimum Capital Ratios for an Individual Bank § 3.11 Standards...
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Minimum requirements for water quality standards submission. 131.6 Section 131.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS General Provisions § 131.6 Minimum requirements for water quality standards submission. The...
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Minimum requirements for water quality standards submission. 131.6 Section 131.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS General Provisions § 131.6 Minimum requirements for water quality standards submission. The...
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) owing to its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) has been used to address the shortcomings of DAS, providing higher image quality, but its resolution improvement falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, using the expansion of the DMAS algorithm; the combination is called EIBMV-DMAS. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally, and the quantitative and qualitative results show that it outperforms DAS, DMAS, and EIBMV. The proposed method lowers the sidelobes by about 365%, 221%, and 40% compared to DAS, DMAS, and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63%, and 20%, respectively.
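To make the DMAS algebra concrete, here is a minimal sketch of DAS and DMAS applied to already-delayed channel data. The eigenspace-based minimum variance stage of EIBMV-DMAS is not reproduced, and the array shapes are illustrative.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: delayed (n_elements, n_samples) -> beamformed trace."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: sum of pairwise sign-preserving square-root
    products, which sharpens the mainlobe relative to DAS."""
    n, m = delayed.shape
    signed_root = np.sign(delayed) * np.sqrt(np.abs(delayed))
    out = np.zeros(m)
    for i in range(n):
        for j in range(i + 1, n):
            out += signed_root[i] * signed_root[j]
    return out

rng = np.random.default_rng(7)
channels = rng.normal(size=(64, 1024))       # placeholder delayed channel data
trace_das, trace_dmas = das(channels), dmas(channels)
```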
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithms are essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter addresses this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
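A fourth-order Runge-Kutta step for a charged particle under the Lorentz force is compact; the sketch below shows one such step, and running it with a negative time step gives the kind of time-reversed tracing described above. The field values and charge-to-mass ratio are placeholders, not the Cassini/Titan configuration.

```python
import numpy as np

def lorentz_accel(v, E, B, q_over_m):
    """Acceleration of a charged particle: a = (q/m) (E + v x B)."""
    return q_over_m * (E + np.cross(v, B))

def rk4_step(x, v, dt, E, B, q_over_m):
    """One classical 4th-order Runge-Kutta step for position and velocity.
    A negative dt traces the trajectory backwards in time."""
    def deriv(state):
        vel = state[3:]
        return np.concatenate([vel, lorentz_accel(vel, E, B, q_over_m)])
    y = np.concatenate([x, v])
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    y_next = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y_next[:3], y_next[3:]

# Placeholder fields and a proton charge-to-mass ratio (illustrative values only).
E = np.zeros(3)
B = np.array([0.0, 0.0, 5e-9])                 # ~5 nT, a magnetospheric-scale field
x, v = np.zeros(3), np.array([1.0e4, 0.0, 0.0])
x, v = rk4_step(x, v, dt=-0.01, E=E, B=B, q_over_m=9.58e7)   # one time-reversed step
```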
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulences of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis, and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfvén waves are present in the trailing edge of this region with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfvén wave energy into random magnetosonic waves.
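Minimum variance matrix analysis of a magnetometer time series reduces to an eigen-decomposition of the field's covariance matrix. A minimal sketch follows, with synthetic data standing in for the Pioneer-11 measurements.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classical MVA: eigen-decompose the 3x3 covariance matrix of field vectors.

    B : array of shape (n_samples, 3)
    Returns eigenvalues (ascending) and eigenvectors as columns; the first
    column is the minimum variance direction.
    """
    M = np.cov(B, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvals, eigvecs

# Synthetic example: a wave field with very little variance along z.
rng = np.random.default_rng(4)
t = np.linspace(0, 60, 600)
B = np.column_stack([np.sin(2 * np.pi * t / 10) + 0.1 * rng.normal(size=t.size),
                     np.cos(2 * np.pi * t / 10) + 0.1 * rng.normal(size=t.size),
                     0.05 * rng.normal(size=t.size)])
eigvals, eigvecs = minimum_variance_analysis(B)
n_hat = eigvecs[:, 0]          # minimum variance direction, roughly along z here
```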
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on the ACC/AHA guideline is not known. Using a publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in the input age (0-1 year) and ±10 % variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both the 5 % and 7.5 % risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
Comparison of the in vitro Effect of Chemical and Herbal Mouthwashes on Candida albicans
Talebi, Somayeh; Sabokbar, Azar; Riazipour, Majid; Saffari, Mohsen
2014-01-01
Background: During recent decades, research has focused on finding scientific evidence for the effects of herbal medicines. Researchers are interested in herbal remedies and aim to substitute herbal materials, which have limited side effects for humans, for chemical formulations. Objectives: The aim of the current study was to compare the in vitro effect of herbal and chemical mouthwashes against Candida albicans. Materials and Methods: In this research, we used a standard strain of C. albicans, PTCC 5027. The suspension was made from a fresh culture of C. albicans (24 hours) and the optical density (turbidity equating to a McFarland standard of 0.5) was read at 530 nm. The C. albicans suspension was cultured on a Sabouraud dextrose agar plate. Next, two wells were filled with mouthwashes and, after incubation at 30°C for 24 hours, the inhibition zone was measured. The minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC) of the mouthwashes were determined. Data were analyzed using the SPSS software, independent t-tests, and one-way analysis of variance (one-way ANOVA). Results: Based on the agar diffusion findings (P = 0.764) and the MIC and MFC tests (P = 0.879), there were no significant differences between the antifungal effects of the herbal and chemical mouthwashes. Conclusions: This study showed that chemical mouthwashes acted better than herbal mouthwashes, and among the chemical mouthwashes Oral B was the most effective, although the differences did not reach statistical significance. PMID:25741429
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...
2017-08-08
Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for patron deposit accounts and cashless systems? 543.14 Section 543.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.14 What are the minimum internal control...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for gaming promotions and player tracking systems? 543.12 Section 543.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.12 What are the minimum internal contro...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for patron deposit accounts and cashless systems? 543.14 Section 543.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.14 What are the minimum internal control...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for gaming promotions and player tracking systems? 543.12 Section 543.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.12 What are the minimum internal contro...
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels for obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-09
...). Accordingly, AMS published a notice of review and request for written comments on the Standards in the April...; FV10-996-610 Review] Minimum Quality and Handling Standards for Domestic and Imported Peanuts Marketed... the Minimum Quality and Handling Standards for Domestic and Imported Peanuts Marketed in the United...
Code of Federal Regulations, 2010 CFR
2010-04-01
... CLASS II GAMES § 547.17 How does a tribal gaming regulatory authority apply for a variance from these... 25 Indians 2 2010-04-01 2010-04-01 false How does a tribal gaming regulatory authority apply for a variance from these standards? 547.17 Section 547.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT...
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
Code of Federal Regulations, 2012 CFR
2012-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
Code of Federal Regulations, 2013 CFR
2013-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
Code of Federal Regulations, 2011 CFR
2011-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
Code of Federal Regulations, 2010 CFR
2010-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
Code of Federal Regulations, 2014 CFR
2014-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P
2011-02-01
The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, the younger group (≤ 15 y of age) was significantly less negative for girls (boys, -0.8 mm and girls, -0.3 mm, p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r(p) 0.87-0.97 in boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
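As a reference for the reliability statistic reported above, Cronbach's alpha for a subjects-by-raters table can be computed in a few lines. The sketch below is generic and uses made-up values rather than the study's radiographic measurements.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for inter-rater consistency.

    ratings : array of shape (n_subjects, n_raters)
    """
    n_subj, k = ratings.shape
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
true_variance = rng.normal(-0.5, 1.0, size=138)                       # hypothetical ulnar variance, mm
ratings = true_variance[:, None] + rng.normal(scale=0.3, size=(138, 4))  # four raters with noise
alpha = cronbach_alpha(ratings)
```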
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201 Minimum standards for health benefits plans. (a) To qualify for approval by OPM, a health benefits plan...
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201 Minimum standards for health benefits plans. (a) To qualify for approval by OPM, a health benefits plan...
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
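For readers who want to see the broken-line linear (linear-plateau) functional form in code, the sketch below fits it by ordinary nonlinear least squares with SciPy. The data are hypothetical, and the heteroskedastic mixed-model machinery the authors used in SAS (GLIMMIX/NLMIXED), including the full grid search over starting values, is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_linear(x, plateau, slope, breakpoint):
    """Ascending broken-line (linear-plateau): rises with `slope` below the
    breakpoint and stays at `plateau` above it."""
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

# Hypothetical G:F response to the SID Trp:Lys ratio (illustration only, not the study data).
ratio = np.array([14.7, 15.3, 16.0, 16.6, 17.3, 18.0, 18.7])
gf = np.array([0.600, 0.630, 0.660, 0.674, 0.675, 0.676, 0.674])
p0 = [0.675, 0.03, 16.5]              # starting values; scanning a grid of p0 aids convergence
params, cov = curve_fit(broken_line_linear, ratio, gf, p0=p0)
plateau, slope, breakpoint = params
breakpoint_se = np.sqrt(np.diag(cov))[2]
```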
Code of Federal Regulations, 2010 CFR
2010-04-01
... requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMENT ACT AMENDMENTS TO THE INDIAN SELF-DETERMINATION AND EDUCATION ACT... minimum standards for Tribe/Consortium management systems? Yes, the Tribe/Consortium must maintain...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... Proposed Information Collection to OMB Minimum Property Standards for Multifamily and Care-Type Occupancy... Lists the Following Information Title of Proposal: Minimum Property Standards for Multifamily and Care-Type Occupancy Housing. OMB Approval Number: 2502-0321. Form Numbers: None. Description of the Need for...
USDA-ARS?s Scientific Manuscript database
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the minimum information about any (x) sequence (MIxS). The standards are the minimum information about a single amplified genome (MISAG) and the ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... Parts 30 and 3400 SAFE Mortgage Licensing Act: Minimum Licensing Standards and Oversight... No. FR-5271-F-03] RIN 2502-A170 SAFE Mortgage Licensing Act: Minimum Licensing Standards and... pursuant to the Secure and Fair Enforcement Mortgage Licensing Act of 2008 (SAFE Act or Act), to ensure...
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
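A bare-bones weighted least squares estimator corresponding to the estimation approach described above. The reciprocal-variance weights are the usual choice, and the data are placeholders, not the Kedah/Perlis paddy data.

```python
import numpy as np

def weighted_least_squares(X, y, weights):
    """WLS estimate beta = (X' W X)^-1 X' W y with W = diag(weights).

    Weights are typically 1/variance of each observation, so imprecise
    points contribute less to the fit.
    """
    W = np.diag(weights)
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)

# Placeholder usage with heteroskedastic errors.
rng = np.random.default_rng(6)
x = rng.uniform(0, 10, size=100)
sigma = 0.5 + 0.3 * x                     # error spread grows with x
y = 2.0 + 1.5 * x + rng.normal(scale=sigma)
X = np.column_stack([np.ones_like(x), x])
beta = weighted_least_squares(X, y, 1.0 / sigma**2)
```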
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimate from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written...) The CSO may forward the application to the Chief Health, Safety and Security Officer. (2) If the CSO...
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum standards for game artwork, glass... the minimum standards for game artwork, glass, and rules? (a) Rules, instructions, and prize schedules...: (1) Game name, rules, and options such as the purchase or wager amount stated clearly and...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum standards for game artwork, glass... the minimum standards for game artwork, glass, and rules? (a) Rules, instructions, and prize schedules...: (1) Game name, rules, and options such as the purchase or wager amount stated clearly and...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
Code of Federal Regulations, 2010 CFR
2010-04-01
... and enabling Class II gaming system components? 547.6 Section 547.6 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.6 What are the minimum technical standards for enrolling and...
48 CFR 22.1002-4 - Application of the Fair Labor Standards Act minimum wage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Application of the Fair Labor Standards Act minimum wage. 22.1002-4 Section 22.1002-4 Federal Acquisition Regulations System... Service Contract Act of 1965, as Amended 22.1002-4 Application of the Fair Labor Standards Act minimum...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
Code of Federal Regulations, 2010 CFR
2010-04-01
... reconciliation process; (ii) Pull tabs, including but not limited to, statistical records, winner verification... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.42 What are...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
25 CFR 543.23 - What are the minimum internal control standards for audit and accounting?
Code of Federal Regulations, 2014 CFR
2014-04-01
... supervision, bingo cards, bingo card sales, draw, prize payout; cash and equivalent controls, technologic aids... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for audit... INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.23 What are the...
25 CFR 543.23 - What are the minimum internal control standards for audit and accounting?
Code of Federal Regulations, 2013 CFR
2013-04-01
... supervision, bingo cards, bingo card sales, draw, prize payout; cash and equivalent controls, technologic aids... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for audit... INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.23 What are the...
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG recordings from two healthy subjects presented with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
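The "beamformer first, then singular value decomposition + ICA" ordering can be sketched as follows. This is a rough outline under stated assumptions, not the authors' implementation: the beamformer-reconstructed voxel time courses are assumed to be available already, the number of retained components is arbitrary, and FastICA stands in for whichever ICA variant was used.

import numpy as np
from sklearn.decomposition import FastICA

def source_space_ica(source_tc, n_components=20, random_state=0):
    """Sketch of source-space ICA: source_tc is an (n_voxels, n_times) matrix
    of beamformer-reconstructed time courses (assumed precomputed). SVD reduces
    the voxel dimension, then ICA unmixes the reduced data into temporally
    independent components with corresponding source-space maps."""
    X = np.asarray(source_tc, dtype=float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk, sk, Vk = U[:, :n_components], s[:n_components], Vt[:n_components]
    reduced_tc = sk[:, None] * Vk                    # (k, n_times)
    ica = FastICA(n_components=n_components, random_state=random_state)
    sources = ica.fit_transform(reduced_tc.T).T      # (k, n_times) time courses
    spatial_maps = Uk @ ica.mixing_                  # (n_voxels, k) maps
    return sources, spatial_maps

# Toy demonstration with random "time courses" (500 voxels, 1000 samples)
rng = np.random.default_rng(0)
tc, maps = source_space_ica(rng.normal(size=(500, 1000)), n_components=5)
print(tc.shape, maps.shape)                          # (5, 1000) (500, 5)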
Minimum reporting standards for clinical research on groin pain in athletes
Delahunt, Eamonn; Thorborg, Kristian; Khan, Karim M; Robinson, Philip; Hölmich, Per; Weir, Adam
2015-01-01
Groin pain in athletes is a priority area for sports physiotherapy and sports medicine research. Heterogeneous studies with low methodological quality dominate research related to groin pain in athletes. Low-quality studies undermine the external validity of research findings and limit the ability to generalise findings to the target patient population. Minimum reporting standards for research on groin pain in athletes are overdue. We propose a set of minimum reporting standards based on best available evidence to be utilised in future research on groin pain in athletes. Minimum reporting standards are provided in relation to: (1) study methodology, (2) study participants and injury history, (3) clinical examination, (4) clinical assessment and (5) radiology. Adherence to these minimum reporting standards will strengthen the quality and transparency of research conducted on groin pain in athletes. This will allow an easier comparison of outcomes across studies in the future. PMID:26031644
Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.
Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe
2015-01-01
Geometric morphometrics aims to characterize the geometry of complex traits. It is therefore multivariate in essence. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing the within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is oriented in a comparable way in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation in the PCA, bgPCA and CVA. Rather than arguing that one method performs 'better' than another, it emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
Hassenbusch, S J; Colvin, O M; Anderson, J H
1995-07-01
A relatively simple, high-sensitivity gas chromatographic assay is described for nitrosourea compounds, such as BCNU [1,3-bis(2-chloroethyl)-1-nitrosourea] and MeCCNU [1-(2-chloroethyl)-3-(trans-4-methylcyclohexyl)-1-nitrosourea], in small biopsy samples of brain and other tissues. After extraction with ethyl acetate, secondary amines in BCNU and MeCCNU are derivatized with trifluoroacetic anhydride. Compounds are separated and quantitated by gas chromatography using a capillary column with temperature programming and an electron capture detector. Standard curves of BCNU indicate a coefficient of variation of 0.066 +/- 0.018, a correlation coefficient of 0.929, and an extraction efficiency from whole brain of 68% with a minimum detectable amount of 20 ng in 5-10 mg samples. The assay has been facile and sensitive in over 1000 brain biopsy specimens after intravenous and intraarterial infusions of BCNU.
NASA Astrophysics Data System (ADS)
McPhee, Sidney A.
1985-12-01
This study was designed to survey and compare attitudes and perceptions toward school counseling and student personnel programs as held by educators in the Caribbean. The subjects in the study comprised 275 teachers and administrators employed in public and private junior and senior high schools in Nassau, Bahamas. The statistical tests used to analyze the data were the Kruskal-Wallis one-way analysis of variance and the Friedman two-way analysis for repeated measures. The findings indicate that administrators at all levels expressed significantly more favorable attitudes and perceptions toward counseling and student personnel programs in the schools than teachers. Teachers in the study expressed the following: (a) serious concern regarding the competency of practicing counselors in their schools; (b) a need for clarification of their role and function in the guidance process and a clarification of the counselor's role; and (c) a belief that minimum acceptable standards should be established for school counseling positions.
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized choice of k leaders is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
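For reference, the MVDR weights that the optimizer is applied to have the standard closed form w = R^{-1}a / (a^H R^{-1}a); the sketch below is the textbook computation, with an assumed uniform linear array and diagonal loading, and is not the SL-GSA-tuned variant from the paper.

import numpy as np

def mvdr_weights(R, a, loading=1e-3):
    """Minimum variance distortionless response weights: minimize w^H R w
    subject to w^H a = 1, giving w = R^{-1} a / (a^H R^{-1} a).
    R: (M, M) sensor covariance; a: (M,) steering vector.
    Diagonal loading (an assumed addition) stabilizes the inverse for ill-conditioned R."""
    M = R.shape[0]
    Rl = R + loading * np.real(np.trace(R)) / M * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# 8-element half-wavelength-spaced array steered to 20 degrees, noise-only covariance
M, d = 8, 0.5
a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(20.0)))
w = mvdr_weights(np.eye(M), a)
print(abs(w.conj() @ a))   # distortionless constraint is met: prints 1.0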
Code of Federal Regulations, 2010 CFR
2010-04-01
... but not limited to, bingo card control, payout procedures, and cash reconciliation process; (ii) Pull... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are...
Code of Federal Regulations, 2010 CFR
2010-04-01
... but not limited to, bingo card control, payout procedures, and cash reconciliation process; (ii) Pull... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are...
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
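The residual-variance-versus-time-scale analysis can be approximated with a power spectral density of the data-minus-model series. The sketch below uses Welch's method on a regularly sampled hourly series; the segment length, the use of log-density residuals, and the synthetic red-noise example are illustrative assumptions rather than the authors' procedure.

import numpy as np
from scipy.signal import welch

def residual_psd(density_obs, density_model, dt_hours=1.0):
    """PSD of data-minus-model mass-density residuals. A near-straight line of
    log(PSD) against log(frequency) indicates power-law behaviour; a bump near
    1/(27 days) would correspond to the solar-rotation enhancement."""
    resid = np.log(density_obs) - np.log(density_model)
    f, pxx = welch(resid, fs=1.0 / dt_hours, nperseg=min(4096, resid.size))
    return f[1:], pxx[1:]                      # drop the zero-frequency bin

# Synthetic example: two years of hourly red-noise-like residuals
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=2 * 365 * 24)) * 1e-3
f, pxx = residual_psd(np.exp(walk), np.ones_like(walk))
print("approximate power-law exponent:", round(np.polyfit(np.log(f), np.log(pxx), 1)[0], 2))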
Noise level in a neonatal intensive care unit in Santa Marta - Colombia.
Garrido Galindo, Angélica Patricia; Camargo Caicedo, Yiniva; Velez-Pereira, Andres M
2017-09-30
The environment of neonatal intensive care units is influenced by numerous sources of noise emission, which raise noise levels and may cause hearing impairment and other physiological and psychological changes in the newborn, as well as problems for care staff. The objective was to evaluate the level and sources of noise in the neonatal intensive care unit. Noise was sampled for 20 consecutive days, every 60 seconds, on the A-weighting curve in fast mode with a Type I sound level meter. The average, maximum, and minimum levels and the 10th, 50th, and 90th percentiles were recorded. The values were aggregated by hour and work shift and analyzed by analysis of variance. The sources were characterized in one-third octave bands. The average level was 64.00 ±3.62 dB(A), with a maximum of 76.04 ±5.73 dB(A), a minimum of 54.84 ±2.61 dB(A), and background noise of 57.95 ±2.83 dB(A). We found four sources with levels between 16.8 and 63.3 dB(A). Statistical analysis showed significant differences between hours and work shifts, with higher values in the early hours of the day. The values recorded exceed the standards suggested by several organizations. The sources identified and measured showed high values at low frequencies.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-19
..., Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, VA 22209-3939. (4) Hand Delivery or Courier: MSHA, Office of Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350... CONTACT: Mario Distasio, Chief of the Economic Analysis Division, Office of Standards, Regulations, and...
36 CFR 28.13 - Variance, commercial and industrial application procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variance, commercial and industrial application procedures. 28.13 Section 28.13 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and...
40 CFR 268.44 - Variance from a treatment standard.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...
40 CFR 268.44 - Variance from a treatment standard.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
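The "neater representation" is presumably the pairwise-difference identity s² = Σ over i<j of (x_i − x_j)² / (n(n−1)), which needs no mean and no calculator for small integer samples; the quick check below verifies that form for n = 3 and n = 4 against the usual definition (the exact representations and bounds in the article may differ).

import numpy as np
from itertools import combinations

def variance_from_pairs(x):
    """Sample variance via pairwise squared differences:
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1)).
    For n = 3: the sum of 3 squared differences over 6;
    for n = 4: the sum of 6 squared differences over 12."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return sum((a - b) ** 2 for a, b in combinations(x, 2)) / (n * (n - 1))

for sample in ([4, 7, 13], [2, 5, 6, 11]):
    assert np.isclose(variance_from_pairs(sample), np.var(sample, ddof=1))
    print(sample, variance_from_pairs(sample))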
36 CFR 28.13 - Variance, commercial and industrial application procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variance, commercial and industrial application procedures. 28.13 Section 28.13 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and...
48 CFR 9904.407-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering studies, experience, or other supporting data) used in setting and revising standards; the period... their related variances may be recognized either at the time purchases of material are entered into the...-price standards are used and related variances are recognized at the time purchases of material are...
78 FR 63873 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
... Internal Control Standards AGENCY: National Indian Gaming Commission, Interior. ACTION: Final rule. SUMMARY: The National Indian Gaming Commission (NIGC) amends its minimum internal control standards for Class... Internal Control Standards. 64 FR 590. The rule added a new part to the Commission's regulations...
NASA Astrophysics Data System (ADS)
Lizana, A.; Foldyna, M.; Stchakovsky, M.; Georges, B.; Nicolas, D.; Garcia-Caurel, E.
2013-03-01
High sensitivity of spectroscopic ellipsometry and reflectometry for the characterization of thin films can strongly decrease when layers, typically metals, absorb a significant fraction of the light. In this paper, we propose a solution to overcome this drawback using total internal reflection ellipsometry (TIRE) and exciting a surface longitudinal wave: a plasmon-polariton. As in the attenuated total reflectance technique, TIRE exploits a minimum in the intensity of reflected transversal magnetic (TM) polarized light and enhances the sensitivity of standard methods to thicknesses of absorbing films. Samples under study were stacks of three films, ZnO : Al/Ag/ZnO : Al, deposited on glass substrates. The thickness of the silver layer varied from sample to sample. We performed measurements with a UV-visible phase-modulated ellipsometer, an IR Mueller ellipsometer and a UV-NIR reflectometer. We used the variance-covariance formalism to evaluate the sensitivity of the ellipsometric data to different parameters of the optical model. Results have shown that using TIRE doubled the sensitivity to the silver layer thickness when compared with the standard ellipsometry. Moreover, the thickness of the ZnO : Al layer below the silver layer can be reliably quantified, unlike for the fit of the standard ellipsometry data, which is limited by the absorption of the silver layer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Are there any minimum employment standards for Indian country law enforcement personnel? 12.31 Section 12.31 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Qualifications and Training Requirements § 12.31 Are there any minimum employment standards...
Frey, Allison; Mika, Stephanie; Nuzum, Rachel; Schoen, Cathy
2009-06-01
Many proposed health insurance reforms would establish a federal minimum benefit standard--a baseline set of benefits to ensure that people have adequate coverage and financial protection when they purchase insurance. Currently, benefit mandates are set at the state level; these vary greatly across states and generally target specific areas rather than set an overall standard for what qualifies as health insurance. This issue brief considers what a broad federal minimum standard might look like by comparing existing state benefit mandates with the services and providers covered under the Federal Employees Health Benefits Program (FEHBP) Blue Cross and Blue Shield standard benefit package, an example of minimum creditable coverage that reflects current standard practice among employer-sponsored health plans. With few exceptions, benefits in the FEHBP standard option either meet or exceed those that state mandates require, indicating that a broad-based national benefit standard would include most existing state benefit mandates.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the minimum wage required by section 6(a) of the Fair Labor Standards Act? 520.200 Section 520.200... lower than the minimum wage required by section 6(a) of the Fair Labor Standards Act? Section 14(a) of..., for the payment of special minimum wage rates to workers employed as messengers, learners (including...
Minimum Standards for Tribal Child Care Centers.
ERIC Educational Resources Information Center
Administration on Children, Youth, and Families (DHHS), Washington, DC. Child Care Bureau.
These minimum standards for tribal child care centers are being issued as guidance. An interim period of at least 1 year will allow tribal agencies to identify implementation issues, ensure that the standards reflect tribal needs, and guarantee that the standards provide adequate protection for children. The standards will be issued as regulations…
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
1972-01-01
The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Milosavljevic, Stephan; McBride, David I; Bagheri, Nasser; Vasiljev, Radivoj M; Mani, Ramakrishnan; Carman, Allan B; Rehn, Borje
2011-04-01
The purpose of this study was to determine exposure to whole-body vibration (WBV) and mechanical shock in rural workers who use quad bikes and to explore how personal, physical, and workplace characteristics influence exposure. A seat-pad-mounted triaxial accelerometer and data logger recorded full-workday vibration and shock data from 130 New Zealand rural workers. Personal, physical, and workplace characteristics were gathered using a modified version of the Whole Body Vibration Health Surveillance Questionnaire. WBVs and mechanical shocks were analysed in accordance with the International Organization for Standardization (ISO 2631-1 and ISO 2631-5) standards and are presented as vibration dose value (VDV) and mechanical shock (S(ed)) exposures. VDV(Z) consistently exceeded European Union (Guide to good practice on whole body vibration. Directive 2002/44/EC on minimum health and safety, European Commission Directorate General Employment, Social Affairs and Equal Opportunities. 2006) guideline exposure action thresholds, with some workers exceeding exposure limit thresholds. Exposure to mechanical shock was also evident. Increasing age had the strongest (negative) association with vibration and shock exposure, with body mass index (BMI) having a similar but weaker effect. Age, daily driving duration, dairy farming, and use of two rear shock absorbers created the strongest multivariate model, explaining 33% of variance in VDV(Z). Only age and dairy farming combined to explain 17% of the variance for daily mechanical shock. Twelve-month prevalence was highest for low back pain (57.7%) and lowest for upper back pain (13.8%). Personal (age and BMI), physical (shock absorbers and velocity), and workplace characteristics (driving duration and dairy farming) suggest that a mix of engineered workplace and behavioural interventions is required to reduce this level of exposure to vibration and shock.
A Formula to Calculate Standard Liver Volume Using Thoracoabdominal Circumference.
Shaw, Brian I; Burdine, Lyle J; Braun, Hillary J; Ascher, Nancy L; Roberts, John P
2017-12-01
With the use of split liver grafts as well as living donor liver transplantation (LDLT) it is imperative to know the minimum graft volume to avoid complications. Most current formulas to predict standard liver volume (SLV) rely on weight-based measures that are likely inaccurate in the setting of cirrhosis. Therefore, we sought to create a formula for estimating SLV without weight-based covariates. LDLT donors underwent computed tomography scan volumetric evaluation of their livers. An optimal formula for calculating SLV using the anthropomorphic measure thoracoabdominal circumference (TAC) was determined using leave-one-out cross-validation. The ability of this formula to correctly predict liver volume was checked against other existing formulas by analysis of variance. The ability of the formula to predict small grafts in LDLT was evaluated by exact logistic regression. The optimal formula using TAC was determined to be SLV = (TAC × 3.5816) - (Age × 3.9844) - (Sex × 109.7386) - 934.5949. When compared to historic formulas, the current formula was the only one which was not significantly different than computed tomography determined liver volumes when compared by analysis of variance with Dunnett posttest. When evaluating the ability of the formula to predict small for size syndrome, many (10/16) of the formulas tested had significant results by exact logistic regression, with our formula predicting small for size syndrome with an odds ratio of 7.94 (95% confidence interval, 1.23-91.36; P = 0.025). We report a formula for calculating SLV that does not rely on weight-based variables that has good ability to predict SLV and identify patients with potentially small grafts.
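A direct transcription of the reported regression into a helper function. The abstract does not state the sex coding or the TAC units, so both are flagged as assumptions; inputs must follow the conventions of the original article before the output can be interpreted as millilitres.

def standard_liver_volume(tac, age_years, sex):
    """Standard liver volume from the reported formula:
    SLV = 3.5816*TAC - 3.9844*Age - 109.7386*Sex - 934.5949.
    Assumptions: sex is a 0/1 indicator and TAC is in the units used by the
    original study (both unspecified in the abstract)."""
    return 3.5816 * tac - 3.9844 * age_years - 109.7386 * sex - 934.5949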
Andronache, Adrian; Rosazza, Cristina; Sattin, Davide; Leonardi, Matilde; D'Incerti, Ludovico; Minati, Ludovico
2013-01-01
An emerging application of resting-state functional MRI (rs-fMRI) is the study of patients with disorders of consciousness (DoC), where integrity of default-mode network (DMN) activity is associated to the clinical level of preservation of consciousness. Due to the inherent inability to follow verbal instructions, arousal induced by scanning noise and postural pain, these patients tend to exhibit substantial levels of movement. This results in spurious, non-neural fluctuations of the rs-fMRI signal, which impair the evaluation of residual functional connectivity. Here, the effect of data preprocessing choices on the detectability of the DMN was systematically evaluated in a representative cohort of 30 clinically and etiologically heterogeneous DoC patients and 33 healthy controls. Starting from a standard preprocessing pipeline, additional steps were gradually inserted, namely band-pass filtering (BPF), removal of co-variance with the movement vectors, removal of co-variance with the global brain parenchyma signal, rejection of realignment outlier volumes and ventricle masking. Both independent-component analysis (ICA) and seed-based analysis (SBA) were performed, and DMN detectability was assessed quantitatively as well as visually. The results of the present study strongly show that the detection of DMN activity in the sub-optimal fMRI series acquired on DoC patients is contingent on the use of adequate filtering steps. ICA and SBA are differently affected but give convergent findings for high-grade preprocessing. We propose that future studies in this area should adopt the described preprocessing procedures as a minimum standard to reduce the probability of wrongly inferring that DMN activity is absent.
Husby, Arild; Gustafsson, Lars; Qvarnström, Anna
2012-01-01
The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC data during treadmill walking were analyzed for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detail signals at different wavelet scales by using the discrete wavelet transform. The variances of the detail signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. The results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity for preemptive measures to be undertaken to avoid injurious falls.
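A sketch of the multiscale variance calculation described above, using PyWavelets. The wavelet family, the slope-fitting details, and the synthetic test series are illustrative assumptions; only the overall recipe (eight-level DWT, per-scale variances, slope-based exponent) follows the abstract.

import numpy as np
import pywt

def multiscale_exponent(mtc_signal, wavelet="db4", levels=8):
    """Decompose an MTC series into 8 detail levels, compute the variance of
    the detail coefficients at each scale, and estimate the multiscale
    exponent (beta) as the slope of log2(variance) against scale."""
    coeffs = pywt.wavedec(np.asarray(mtc_signal, dtype=float), wavelet, level=levels)
    details = coeffs[1:]                         # [cD_levels, ..., cD_1]
    scales = np.arange(levels, 0, -1)            # matching scale indices
    variances = np.array([np.var(d) for d in details])
    beta = np.polyfit(scales, np.log2(variances), 1)[0]
    return scales, variances, beta

# Synthetic MTC-like series: random walk plus measurement noise
rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=4096)) + rng.normal(scale=0.5, size=4096)
scales, variances, beta = multiscale_exponent(series)
print("variance at scale 5:", round(float(variances[scales == 5][0]), 3), "beta:", round(beta, 2))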
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (N_e) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust N_e to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
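Two of the standard textbook adjustments that such a procedure draws on can be written out directly: the unequal-sex-ratio correction and the harmonic-mean correction for fluctuating numbers. Whether the paper applies exactly these forms is an assumption; the further adjustments for progeny-number variance, overlapping generations, and the final census conversion are not reproduced here.

def ne_unequal_sex_ratio(n_males, n_females):
    """Effective population size with unequal numbers of breeders:
    N_e = 4 * Nm * Nf / (Nm + Nf)."""
    return 4 * n_males * n_females / (n_males + n_females)

def ne_fluctuating(sizes_per_generation):
    """Effective size over t generations of fluctuating numbers is the
    harmonic mean: N_e = t / sum(1 / N_i)."""
    return len(sizes_per_generation) / sum(1.0 / n for n in sizes_per_generation)

# Example: 10 breeding males and 40 females give N_e = 32, below the
# short-term target of 50 discussed above.
print(ne_unequal_sex_ratio(10, 40))
print(round(ne_fluctuating([60, 45, 80, 30]), 1))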
Missouri Minimum Standards for School Buses
ERIC Educational Resources Information Center
Nicastro, Chris L.
2008-01-01
The revised minimum standards for school bus chassis and school bus bodies have been prepared in conformity with the Revised Statutes of Missouri (RSMo) for school bus transportation. The standards recommended by the 2005 National Conference on School Transportation and the Federal Motor Vehicle Safety Standards (FMVSS) promulgated by the U. S.…
The volume-outcome relationship and minimum volume standards--empirical evidence for Germany.
Hentschker, Corinna; Mennicken, Roman
2015-06-01
For decades, there has been an ongoing discussion about the quality of hospital care, leading, among other things, to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards for both conditions and assess the consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have on average a significantly lower probability of death for both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access, as measured by changes in travel times. Copyright © 2014 John Wiley & Sons, Ltd.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment interaction variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult for breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
Systems, Subjects, Sessions: To What Extent Do These Factors Influence EEG Data?
Melnik, Andrew; Legkov, Petr; Izdebski, Krzysztof; Kärcher, Silke M; Hairston, W David; Ferris, Daniel P; König, Peter
2017-01-01
Lab-based electroencephalography (EEG) techniques have matured over decades of research and can produce high-quality scientific data. It is often assumed that the specific choice of EEG system has limited impact on the data and does not add variance to the results. However, many low-cost and mobile EEG systems are now available, and there is some doubt as to how EEG data vary across these newer systems. We sought to determine how variance across systems compares to variance across subjects or repeated sessions. We tested four EEG systems: two standard research-grade systems, one system designed for mobile use with dry electrodes, and an affordable mobile system with a lower channel count. We recorded four subjects three times with each of the four EEG systems. This setup allowed us to assess the influence of all three factors on the variance of data. Subjects performed a battery of six short standard EEG paradigms based on event-related potentials (ERPs) and steady-state visually evoked potential (SSVEP). Results demonstrated that subjects account for 32% of the variance, systems for 9% of the variance, and repeated sessions for each subject-system combination for 1% of the variance. In most lab-based EEG research, the number of subjects per study typically ranges from 10 to 20, and the uncertainty in estimates of the mean (such as the ERP) will improve by the square root of the number of subjects. As a result, the variance due to EEG system (9%) is of the same order of magnitude as variance due to subjects (32%/sqrt(16) = 8%) with a pool of 16 subjects. The two standard research-grade EEG systems had no significantly different means from each other across all paradigms. However, the two other EEG systems demonstrated different mean values from one or both of the two standard research-grade EEG systems in at least half of the paradigms. In addition to providing specific estimates of the variability across EEG systems, subjects, and repeated sessions, we also propose a benchmark to evaluate new mobile EEG systems by means of ERP responses.
29 CFR 1905.10 - Variances and other relief under section 6(b)(6)(A).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(b)(6)(A). 1905... section 6(b)(6)(A). (a) Application for variance. Any employer, or class of employers, desiring a variance from a standard, or portion thereof, authorized by section 6(b)(6)(A) of the Act may file a written...
Standard errors in forest area
Joseph McCollum
2002-01-01
I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...
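The "which variance-of-a-product equation" subtlety can be made concrete: for independent estimates the exact variance of a product carries an extra Vx*Vy term that the common first-order (delta-method) approximation drops. The Monte Carlo check below is a generic illustration with made-up numbers, not the estimator actually used in forest-area double sampling.

import numpy as np

def var_product_exact(mx, my, vx, vy):
    """Exact variance of a product of two independent random variables:
    Var(XY) = mx^2*vy + my^2*vx + vx*vy."""
    return mx**2 * vy + my**2 * vx + vx * vy

def var_product_approx(mx, my, vx, vy):
    """First-order (delta-method) approximation, which drops the vx*vy term."""
    return mx**2 * vy + my**2 * vx

rng = np.random.default_rng(3)
x = rng.normal(0.6, 0.1, size=1_000_000)        # e.g., an estimated forest proportion
y = rng.normal(5000.0, 400.0, size=1_000_000)   # e.g., an estimated total area
print(int(np.var(x * y)))                                 # Monte Carlo: about 309,200
print(var_product_exact(0.6, 5000.0, 0.1**2, 400.0**2))   # 309,200
print(var_product_approx(0.6, 5000.0, 0.1**2, 400.0**2))  # 307,600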
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient estimation of noise variance is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum (finest) scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
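A minimal stand-in for the workflow described above, using PyWavelets: the noise standard deviation is estimated from the finest-scale detail coefficients (here with the common median-absolute-deviation estimator rather than the paper's two-state Gaussian mixture classification), and a soft/universal threshold is used as a placeholder for the improved threshold function, which is not reproduced.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db6", level=5):
    """Wavelet threshold de-noising driven by a noise-variance estimate.
    The MAD estimate and the soft/universal threshold are standard substitutes
    for the mixture-model estimate and the improved threshold of the paper."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise sd from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic test: slow oscillation plus white noise
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(4).normal(size=t.size)
print("rms error after de-noising:",
      round(float(np.sqrt(np.mean((wavelet_denoise(noisy) - clean) ** 2))), 3))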
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Payment of minimum wage specified in section 6(a)(1) of the... and Procedures § 4.2 Payment of minimum wage specified in section 6(a)(1) of the Fair Labor Standards... employees shall pay any employees engaged in such work less than the minimum wage specified in section 6(a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-18
... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO- XXX, Minimum Aviation Performance Standards (MASPS...
2013 Missouri Minimum Standards for School Buses
ERIC Educational Resources Information Center
Nicastro, Chris L.
2012-01-01
The revised minimum standards for school bus chassis and school bus bodies have been prepared in conformity with the Revised Statutes of Missouri (RSMo) for school bus transportation. The standards recommended by the 2010 National Conference on School Transportation and the Federal Motor Vehicle Safety Standards (FMVSS) promulgated by the U. S.…
Development of minimum standards for hardwoods used in producing underground coal mine timbers
Floyd G. Timson
1978-01-01
This note presents minimum standards for raw material used in the production of sawn, split, and round timbers for the underground mining industry. The standards are based on a summary of information gathered from many mine-timber producers.
Impact of HIPAA’s Minimum Necessary Standard on Genomic Data Sharing
Evans, Barbara J.; Jarvik, Gail P.
2017-01-01
Purpose This article provides a brief introduction to the HIPAA Privacy Rule’s minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. Methods This research used the Thomson Reuters Westlaw™ database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data-sharing. We considered relevant example cases of genomic data-sharing needs. Results In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized—whether for research, public health, or clinical interpretation and medical practice support—affects how the minimum necessary standard applies and its overall impact on data access and use. Conclusion There is no clear regulatory guidance on how to apply HIPAA’s minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy-makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows. PMID:28914268
Impact of HIPAA's minimum necessary standard on genomic data sharing.
Evans, Barbara J; Jarvik, Gail P
2018-04-01
This article provides a brief introduction to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule's minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. This research used the Thomson Reuters Westlaw database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data sharing. We considered relevant example cases of genomic data-sharing needs. In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized-whether for research, public health, or clinical interpretation and medical practice support-affects how the minimum necessary standard applies and its overall impact on data access and use. There is no clear regulatory guidance on how to apply HIPAA's minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows.
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
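To make the top-level fusion step concrete, the following sketch fuses N local state estimates by the linear minimum-variance rule under the simplifying assumption of uncorrelated local estimation errors; the paper's unscented-transformation formulation and the adaptive fading local filters are not reproduced here, and the example states and covariances are hypothetical.

```python
import numpy as np

def fuse(estimates, covariances):
    """Linear minimum-variance fusion of N local state estimates, assuming the
    local estimation errors are uncorrelated (a simplification of the paper's
    general multi-sensor fusion)."""
    info = sum(np.linalg.inv(P) for P in covariances)  # summed information matrices
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    return x_fused, P_fused

# two hypothetical local filters tracking the same 2-state system
x1, P1 = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
x2, P2 = np.array([1.1, 0.4]), np.diag([0.09, 0.04])
print(fuse([x1, x2], [P1, P2]))
```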
The Need for Higher Minimum Staffing Standards in U.S. Nursing Homes
Harrington, Charlene; Schnelle, John F.; McGregor, Margaret; Simmons, Sandra F.
2016-01-01
Many U.S. nursing homes have serious quality problems, in part, because of inadequate levels of nurse staffing. This commentary focuses on two issues. First, there is a need for higher minimum nurse staffing standards for U.S. nursing homes based on multiple research studies showing a positive relationship between nursing home quality and staffing and the benefits of implementing higher minimum staffing standards. Studies have identified the minimum staffing levels necessary to provide care consistent with the federal regulations, but many U.S. facilities have dangerously low staffing. Second, the barriers to staffing reform are discussed. These include economic concerns about costs and a focus on financial incentives. The enforcement of existing staffing standards has been weak, and strong nursing home industry political opposition has limited efforts to establish higher standards. Researchers should study the ways to improve staffing standards and new payment, regulatory, and political strategies to improve nursing home staffing and quality. PMID:27103819
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides more stable information ratio.
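As a reference point for the static approach described in the abstract, this sketch computes the closed-form global minimum-variance portfolio from a historical covariance matrix (the covariance values are hypothetical). The stochastic alternative studied in the paper would instead draw return and risk scenarios from fitted distributions and re-optimize across them.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights (fully invested, shorting allowed):
    w = S^-1 1 / (1' S^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# hypothetical covariance matrix estimated from historical index returns
cov = np.array([[0.040, 0.010, 0.006],
                [0.010, 0.030, 0.008],
                [0.006, 0.008, 0.020]])
w = min_variance_weights(cov)
print(w, w @ cov @ w)  # weights and the resulting portfolio variance
```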
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Standards for determination of appropriate individual institution minimum capital ratios. 615.5351 Section 615.5351 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Establishment of Minimum Capital...
Simulating future uncertainty to guide the selection of survey designs for long-term monitoring
Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
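The quantitative sampling objective described here (minimum detectable trend, time frame, power, and Type I error) can be explored with a simple Monte Carlo power check. The sketch below is a deliberately minimal version that treats each year as a single noisy observation and ignores within-year sampling error and revisit designs; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

def trend_power(slope, sd, years=10, alpha=0.10, n_sim=2000, seed=1):
    """Monte Carlo power to detect a linear trend in a monitoring indicator:
    the fraction of simulated records whose fitted slope is significant."""
    rng = np.random.default_rng(seed)
    t = np.arange(years)
    hits = 0
    for _ in range(n_sim):
        y = slope * t + rng.normal(0.0, sd, size=years)
        hits += stats.linregress(t, y).pvalue < alpha
    return hits / n_sim

# e.g. can a decline of 2 units/yr be detected against year-to-year noise with sd = 5?
print(trend_power(slope=-2.0, sd=5.0))
```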
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... compliance plan proposed by the applicant can reasonably be implemented and will achieve compliance as...
77 FR 60625 - Minimum Internal Control Standards for Class II Gaming
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
...-37 Minimum Internal Control Standards for Class II Gaming AGENCY: National Indian Gaming Commission... Internal Control Standards that were published on September 21, 2012. DATES: The effective date [email protected] . FOR FURTHER INFORMATION CONTACT: Jennifer Ward, Attorney, NIGC Office of General Counsel, at...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shojaei, M; Dumitru, N; Pella, S
2016-06-15
Purpose: High dose rate brachytherapy is a highly localized radiation therapy with a very high dose gradient, so immobilization is one of the most important parts of the treatment. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor being treated. We review the ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement with respect to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, were covered together with their dosimetric implications and measured in terms of dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fractionation, localization devices are recommended, in place, with planning consistent with the initial fraction. Many of the immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
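A minimal sketch of the regression scheme described in the abstract, fitting minimum episodic ANC on pre-episode ANC and the change in discharge by ordinary least squares and reporting the variance explained; the data values and units are hypothetical stand-ins, not ERP observations.

```python
import numpy as np

def fit_min_anc(pre_anc, delta_q, min_anc):
    """OLS fit of min ANC ~ pre-episode ANC + change in discharge, returning the
    coefficients and the fraction of variance explained (R^2)."""
    X = np.column_stack([np.ones_like(pre_anc), pre_anc, delta_q])
    beta, *_ = np.linalg.lstsq(X, min_anc, rcond=None)
    resid = min_anc - X @ beta
    r2 = 1.0 - resid.var() / min_anc.var()
    return beta, r2

# hypothetical episodes: pre-episode ANC, relative discharge increase, minimum ANC
pre = np.array([80., 50., 120., 30., 60., 90.])
dq  = np.array([2.0, 3.5, 1.2, 4.0, 2.8, 1.5])
mn  = np.array([55., 20., 100., 5., 35., 70.])
print(fit_min_anc(pre, dq, mn))
```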
Assessment of the NASA Flight Assurance Review Program
NASA Technical Reports Server (NTRS)
Holmes, J.; Pruitt, G.
1983-01-01
The NASA flight assurance review program was assessed to develop minimum standard guidelines for flight assurance reviews. Documents from NASA centers and NASA headquarters were evaluated to determine current design review practices and procedures. Six reviews were identified for the recommended minimum. The practices and procedures used at the different centers were analyzed to incorporate the most effective ones into the minimum standard review guidelines, and guidelines were defined for procedures, personnel and responsibilities, review items/data checklists, and feedback and closeout. The six recommended reviews and the minimum standards guidelines developed for flight assurance reviews are presented. Observations and conclusions for further improving the NASA review and quality assurance process are outlined.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We quantitatively compare the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and of the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-05
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
Code of Federal Regulations, 2011 CFR
2011-04-01
... or tribal organization's financial management system contain to meet these standards? 900.45 Section... ASSISTANCE ACT Standards for Tribal or Tribal Organization Management Systems Standards for Financial Management Systems § 900.45 What specific minimum requirements shall an Indian tribe or tribal organization's...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
Minimum Essential Requirements and Standards in Medical Education.
ERIC Educational Resources Information Center
Wojtczak, Andrzej; Schwarz, M. Roy
2000-01-01
Reviews the definition of standards in general, and proposes a definition of standards and global minimum essential requirements for use in medical education. Aims to serve as a tool for the improvement of quality and international comparisons of basic medical programs. Explains the IIME (Institute for International Medical Education) project…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2010 CFR
2010-04-01
... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault...
Noise level in a neonatal intensive care unit in Santa Marta - Colombia.
Garrido Galindo, Angélica Patricia; Velez-Pereira, Andres M
2017-01-01
Abstract Introduction: The environment of neonatal intensive care units is influenced by numerous sources of noise emission, which raise noise levels and may cause hearing impairment and other physiological and psychological changes in the newborn, as well as problems for care staff. Objective: To evaluate the level and sources of noise in the neonatal intensive care unit. Methods: Noise was sampled for 20 consecutive days, every 60 seconds, using A-weighting and fast response mode with a Type I sound level meter. The average, maximum and minimum levels and the 10th, 50th and 90th percentiles were recorded. The values were aggregated by hour and work shift and analysed by analysis of variance. The sources were characterized in one-third octave bands. Results: The average level was 64.00 ±3.62 dB(A), with a maximum of 76.04 ±5.73 dB(A), a minimum of 54.84 ±2.61 dB(A), and background noise of 57.95 ±2.83 dB(A). We found four sources with levels between 16.8 and 63.3 dB(A). Statistical analysis showed significant differences between hours and work shifts, with higher values in the early hours of the day. Conclusion: The values presented exceed the standards suggested by several organizations. The sources identified and measured showed high values at low frequencies. PMID:29213154
Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.
2013-01-01
There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments of which the Chemical Analysis Working Group (CAWG) is a member of this community effort. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments including: sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or http://Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is treated as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics. The accuracy of the established formula was verified through laboratory calibration tests and model slope monitoring experiments. The displacement of a landslide can be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and measured displacements decreases gradually, as does the variance of the error, indicating that the theoretical results become increasingly reliable. This also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.
28 CFR 50.24 - Annuity broker minimum qualifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Annuity broker minimum qualifications. 50....24 Annuity broker minimum qualifications. (a) Minimum standards. The Civil Division, United States Department of Justice, shall establish a list of annuity brokers who meet minimum qualifications for...
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
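For orientation, the sketch below computes CRLBs for the diffusion-kurtosis signal model via a numerical Jacobian under the simpler Gaussian-noise assumption with known sigma; the paper's contribution is the corresponding Fisher matrix for noncentral χ noise, which would replace the Gaussian information term here. The b-values, parameter values, and noise level are hypothetical.

```python
import numpy as np

def crlb_gaussian(theta, b_values, sigma, eps=1e-6):
    """CRLB (variances) for the diffusion-kurtosis signal
    S(b) = S0*exp(-b*D + b^2*D^2*K/6) under Gaussian noise of known sigma."""
    def model(th):
        s0, d, k = th
        return s0 * np.exp(-b_values * d + (b_values ** 2) * (d ** 2) * k / 6.0)
    # numerical central-difference Jacobian of the signal w.r.t. (S0, D, K)
    J = np.empty((len(b_values), 3))
    for j in range(3):
        dth = np.zeros(3); dth[j] = eps * max(abs(theta[j]), 1.0)
        J[:, j] = (model(theta + dth) - model(theta - dth)) / (2.0 * dth[j])
    fisher = J.T @ J / sigma ** 2        # Gaussian-noise Fisher information
    return np.diag(np.linalg.inv(fisher))

b = np.array([0., 500., 1000., 1500., 2000., 2500.]) * 1e-3  # in ms/um^2 (hypothetical)
theta = np.array([1.0, 1.0, 1.0])                            # S0, D (um^2/ms), K
print(np.sqrt(crlb_gaussian(theta, b, sigma=0.02)))          # minimum SDs
```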
Running Technique is an Important Component of Running Economy and Performance
FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.
2017-01-01
ABSTRACT Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h−1 increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h−1 (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance. PMID:28263283
[Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].
Voigtländer, S; Deiters, T
2015-12-01
Regional disparities in access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used, and their empirical basis. A systematic literature review was undertaken of publications regarding minimum standards for the spatial accessibility of primary care, based on a title word and keyword search using PubMed, SSCI/Web of Science, EMBASE and Cochrane Library. Eight minimum standards from the USA, Germany and Austria were identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half also include distance(s). The maximum acceptable travel time is 30 min, and it tends to be lower in urban areas. According to the identified minimum standards, primary care is part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e.g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experience, and regional planning/central place theory, as well as on legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i.e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, with a gravity-model method to identify places and populations with insufficient spatial accessibility. In the absence of a common minimum standard, we propose, subject to further discussion, to begin with a threshold based on the spatial accessibility limit of the local area, i.e. 30 min to the next primary care provider for at least 90% of the regional population. Exceeding this threshold would necessitate a discussion of a health care deficit and, in line with this, a potential need for intervention, e.g. in terms of alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
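The proposed threshold (30 minutes to the next primary care provider for at least 90% of the regional population) can be checked directly against accessibility data. The sketch below does this for hypothetical municipality-level travel times and populations; it is not a substitute for the gravity-model analysis the authors recommend.

```python
import numpy as np

def share_within(travel_minutes, population, threshold=30.0):
    """Share of the regional population whose travel time to the nearest
    primary care provider does not exceed the threshold."""
    travel_minutes = np.asarray(travel_minutes, dtype=float)
    population = np.asarray(population, dtype=float)
    covered = population[travel_minutes <= threshold].sum()
    return covered / population.sum()

# hypothetical municipalities: travel time to the nearest GP and population
times = [5, 12, 18, 25, 33, 41]
pops  = [12000, 8000, 5000, 3000, 1500, 700]
print(share_within(times, pops) >= 0.90)  # does the region meet the proposed 90% rule?
```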
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
...-2502, Retaining and Flood Walls, 29 September 1989. h. Engineer Technical Letter (ETL) 1110-2-575... vegetation variance request. h. All final documentation for the vegetation variance request shall be uploaded..., especially for I-walls of concern as identified per Paragraph 3.h. For floodwalls, the landside and waterside...
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
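A minimal sketch of post-stratified estimation of a mean with an approximate variance, conditioning on the achieved within-stratum sample sizes and ignoring the finite-population correction and the extra term for random stratum sizes. The plot values, strata, and weights are hypothetical; the small strata illustrate why minimum within-strata sample sizes matter.

```python
import numpy as np

def post_stratified_mean(values, strata, pop_weights):
    """Post-stratified estimate of the mean and an approximate variance
    (conditional on achieved n_h; no FPC, no random-n_h correction term)."""
    est, var = 0.0, 0.0
    for h, w in pop_weights.items():
        y_h = np.asarray([v for v, s in zip(values, strata) if s == h], dtype=float)
        n_h = len(y_h)
        est += w * y_h.mean()
        var += w ** 2 * y_h.var(ddof=1) / n_h
    return est, var

# hypothetical plot data post-stratified into 'forest' (f) / 'nonforest' (n)
vals  = [10, 12, 11, 14, 2, 3, 1, 2, 4]
strat = ['f', 'f', 'f', 'f', 'n', 'n', 'n', 'n', 'n']
print(post_stratified_mean(vals, strat, {'f': 0.6, 'n': 0.4}))
```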
flowVS: channel-specific variance stabilization in flow cytometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
flowVS: channel-specific variance stabilization in flow cytometry
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
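In the spirit of the per-channel variance stabilization described here (not the flowVS package API), the sketch below chooses an asinh cofactor for one channel by minimizing Bartlett's statistic for homogeneity of variances across cell populations; the simulated populations and the cofactor grid are hypothetical.

```python
import numpy as np
from scipy import stats

def best_cofactor(populations, cofactors):
    """Choose the asinh cofactor that makes within-population variances most
    homogeneous (smallest Bartlett statistic) across populations of one channel."""
    best_c, best_stat = None, np.inf
    for c in cofactors:
        transformed = [np.arcsinh(np.asarray(p) / c) for p in populations]
        stat, _ = stats.bartlett(*transformed)
        if stat < best_stat:
            best_c, best_stat = c, stat
    return best_c, best_stat

# two hypothetical cell populations from one fluorescence channel
rng = np.random.default_rng(0)
pops = [rng.normal(200, 80, 500), rng.normal(2000, 700, 500)]
print(best_cofactor(pops, cofactors=np.logspace(0, 4, 50)))
```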
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
Nanometric depth resolution from multi-focal images in microscopy
Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.
2011-01-01
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-01
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the..., ``Certification of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-10
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P
2011-01-01
Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496
Minimum Standards for Approval of Guidance Programs in Small High Schools.
ERIC Educational Resources Information Center
New Mexico State Dept. of Education, Santa Fe.
A small high school is defined as one with an enrollment of 150 students or less in grades 7-12 or in grades 9-12. The minimum guidance program standards for small high schools, as prescribed by the New Mexico Department of Education, include the following requirements--(1) one person with a minimum of six semester hours in guidance must be…
NASA Astrophysics Data System (ADS)
Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha
2017-07-01
After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 m²-or-larger standard for the floor area of isolation rooms will have to be reviewed and standards for the minimum width of isolation rooms will have to be established.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-14
... August 31, 2012. Ray Towles, Deputy Director, Flight Standards Service. Adoption of the Amendment... Muni, RNAV (GPS) RWY 36, Amdt 1 Mountain View, CA, Moffett Federal Airfield, Takeoff Minimums and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-16
... agents and toxins list; whether minimum standards for personnel reliability, physical and cyber security... toxins list; (3) whether minimum standards for personnel reliability, physical and cyber security should...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... CONTACT: Richard A. Dunham III, Flight Procedure Standards Branch (AFS-420), Flight Technologies and... 2012 Red Cloud, NE., Red Cloud Muni, Takeoff Minimums and Obstacle DP, Orig Effective 23 AUGUST 2012...
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
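A hedged sketch of the mechanism discussed in the abstract: the blinded one-sample variance from an internal pilot and the resulting re-estimated sample size for a two-arm normal comparison (a simplification of the three-arm 'gold standard' design). Under 1:1 allocation the lumped variance overestimates the true variance by roughly delta squared over four, which is the source of the overpowering the authors report. All numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def blinded_one_sample_variance(pooled_outcomes):
    """Blinded (lumped) variance estimate from internal-pilot data with the
    treatment labels hidden."""
    return np.var(pooled_outcomes, ddof=1)

def n_per_group(sigma2, delta, alpha=0.05, power=0.9):
    """Two-arm normal-outcome sample size per group for detecting a difference
    delta with the given power (a simplification of the three-arm design)."""
    z = stats.norm.ppf
    return int(np.ceil(2.0 * sigma2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2))

rng = np.random.default_rng(3)
pilot = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(0.5, 1.0, 30)])  # blinded pool
print(n_per_group(blinded_one_sample_variance(pilot), delta=0.5))
```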
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
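To make the weighting and the bootstrap variance estimation concrete, here is a simplified sketch that estimates the average treatment effect by IPTW with a logistic propensity model and obtains a bootstrap standard error by resampling subjects and refitting the propensity model in each replicate. It uses a weighted mean difference on a continuous outcome rather than the weighted Cox model of the paper, and the simulated data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_ate(X, z, y):
    """IPTW estimate of the average treatment effect: weights 1/e for treated and
    1/(1-e) for controls, applied to a weighted mean difference."""
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    w = z / ps + (1 - z) / (1 - ps)
    treated = np.average(y[z == 1], weights=w[z == 1])
    control = np.average(y[z == 0], weights=w[z == 0])
    return treated - control

def bootstrap_se(X, z, y, n_boot=500, seed=0):
    """Bootstrap SE: resample subjects, refit the propensity model, and recompute
    the weighted effect in every replicate."""
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = [iptw_ate(*(a[rng.integers(0, n, n)] for a in (X, z, y))) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

# hypothetical confounded data: treatment probability and outcome both depend on X[:, 0]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 1.0 * z + X[:, 0] + rng.normal(size=300)
print(iptw_ate(X, z, y), bootstrap_se(X, z, y, n_boot=200))
```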
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
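Both combination rules named in the abstract are simple to compute from the single-phenomenology p-values. The sketch below implements Fisher's chi-square combination and Tippett's minimum-p combination; the two input p-values are purely illustrative.

```python
import numpy as np
from scipy import stats

def fisher_combined(p_values):
    """Fisher's method: -2 * sum(log p) follows chi-square with 2k df under H0."""
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return stats.chi2.sf(stat, df=2 * len(p))

def tippett_combined(p_values):
    """Tippett's method: combined p-value based on the minimum of k p-values."""
    p = np.asarray(p_values, dtype=float)
    return 1.0 - (1.0 - p.min()) ** len(p)

# e.g. combining an Ms:mb screening p-value with a depth-based p-value (hypothetical)
print(fisher_combined([0.04, 0.20]), tippett_combined([0.04, 0.20]))
```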
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides more stable information ratio.
Procedures for estimating confidence intervals for selected method performance parameters.
McClure, F D; Lee, J K
2001-01-01
Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), reproducibility variance (σR² = σL² + σr²), laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.
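As a minimal illustration of one of these intervals, the sketch below gives the standard chi-square confidence interval for a single variance component from its estimate and degrees of freedom; square roots of the endpoints give the interval for the corresponding standard deviation. Intervals for composite quantities such as σR² or the variance ratios require approximate methods of the kind the paper describes, and the example numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def variance_ci(s2, dof, alpha=0.05):
    """Two-sided CI for a variance component whose estimate s2 has dof degrees of
    freedom, using the usual chi-square pivot."""
    lower = dof * s2 / stats.chi2.ppf(1 - alpha / 2, dof)
    upper = dof * s2 / stats.chi2.ppf(alpha / 2, dof)
    return lower, upper

# e.g. a repeatability variance of 0.64 estimated with 24 degrees of freedom
lo, hi = variance_ci(0.64, 24)
print((lo, hi), (np.sqrt(lo), np.sqrt(hi)))  # CI for the variance and for the SD
```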
Impact of the reduced vertical separation minimum on the domestic United States
DOT National Transportation Integrated Search
2009-01-31
Aviation regulatory bodies have enacted the reduced vertical separation minimum standard over most of the globe. The reduced vertical separation minimum is a technique that reduces the minimum vertical separation distance between aircraft from 2000 t...
Carpenter, Iain; Perry, Michelle; Challis, David; Hope, Kevin
2003-05-01
to determine if a combination of Minimum Data Set/Resident Assessment Instrument (MDS/RAI) assessment variables and the Resource Utilisation Groups version III (RUG-III) case-mix system could be used as a method of identifying and reimbursing registered nursing care needs in long-term care. the sample included 193 nursing home residents from four nursing homes from three different locations and care providers in England. The study included assessments of residents' care needs using either the MDS/RAI assessments or RUG stand-alone questionnaires and a time study that recorded the amount of nursing time received by residents over a 24-h period. Validity of RUG-III for explaining the distribution of care time between residents in different RUG-III groups was tested. The difference in direct and indirect care provided by registered general nurses (RGN) and care assistants (CA) to residents in RUG-III clinical groups was compared. the RUG-III system explained 56% of the variance in care time (Eta2, P=0.0001). Residents in RUG-III groups associated with particular medical and nursing needs (enhanced RGN care) received more than twice as much indirect RGN care time (t-test, P<0.001) and 1.4 times as much direct RGN and direct CA time (t-test, P<0.01) than residents with primarily cognitive impairment or physical problems only (standard RGN care). Residents with enhanced RGN care received an average of 48.1 min of RGN care in 24 h (95% CI 4.1-55.2) compared with an average of 31.1 min (95% CI 26.8-35.5) for residents in the standard RGN care group. A third low RGN care group was created following publication of the Department of Health guidance on NHS Funded Nursing Care. With three levels, the enhanced care group receives about 38% more than the standard group, and the low group receives about 50% of the standard group. the RUG-III system effectively differentiated between nursing home residents who are receiving 'low', 'standard' and 'enhanced' RGN care time. The findings could provide the basis of a reimbursement system for registered nursing time in long-term care facilities in the UK.
50 CFR 648.50 - Shell-height standard.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Shell-height standard. 648.50 Section 648... Atlantic Sea Scallop Fishery § 648.50 Shell-height standard. (a) Minimum shell height. The minimum shell height for in-shell scallops that may be landed, or possessed at or after landing, is 3.5 inches (8.9 cm...
50 CFR 648.50 - Shell-height standard.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Shell-height standard. 648.50 Section 648... Atlantic Sea Scallop Fishery § 648.50 Shell-height standard. (a) Minimum shell height. The minimum shell height for in-shell scallops that may be landed, or possessed at or after landing, is 3.5 inches (8.9 cm...
Code of Federal Regulations, 2012 CFR
2012-04-01
... Design Specification for Wood Construction—1991. American National Standards Institute, 11 West 42nd... Street, New York, NY 10017. ASCE 7-88 Minimum Design Loads for Buildings and Other Structures (Formerly... Institute Building, College Park, MD 20740 Telephone (301) 277-4258. MSI-1-81 Thickness Design—Asphalt...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Design Specification for Wood Construction—1991. American National Standards Institute, 11 West 42nd... Street, New York, NY 10017. ASCE 7-88 Minimum Design Loads for Buildings and Other Structures (Formerly... Institute Building, College Park, MD 20740 Telephone (301) 277-4258. MSI-1-81 Thickness Design—Asphalt...
Code of Federal Regulations, 2013 CFR
2013-04-01
... Design Specification for Wood Construction—1991. American National Standards Institute, 11 West 42nd... Street, New York, NY 10017. ASCE 7-88 Minimum Design Loads for Buildings and Other Structures (Formerly... Institute Building, College Park, MD 20740 Telephone (301) 277-4258. MSI-1-81 Thickness Design—Asphalt...
Code of Federal Regulations, 2014 CFR
2014-04-01
... Design Specification for Wood Construction—1991. American National Standards Institute, 11 West 42nd... Street, New York, NY 10017. ASCE 7-88 Minimum Design Loads for Buildings and Other Structures (Formerly... Institute Building, College Park, MD 20740 Telephone (301) 277-4258. MSI-1-81 Thickness Design—Asphalt...
Code of Federal Regulations, 2010 CFR
2010-04-01
... Design Specification for Wood Construction—1991. American National Standards Institute, 11 West 42nd... Street, New York, NY 10017. ASCE 7-88 Minimum Design Loads for Buildings and Other Structures (Formerly... Institute Building, College Park, MD 20740 Telephone (301) 277-4258. MSI-1-81 Thickness Design—Asphalt...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
The minimum follow-up required for radial head arthroplasty: a meta-analysis.
Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P
2017-12-01
The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases were searched for articles published between January 2000 and March 2017. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation and the reason for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were Group I and included 80 implants and 37 were Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years; painful loosening was the main indication for re-operation. In Group II, 33 articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.
DOT National Transportation Integrated Search
1997-02-01
This report contains a summary of the work performed during the development of a minimum performance standard for lavatory trash receptacle automatic fire extinguishers. The developmental work was performed under the direction of the International Ha...
Rules and Regulations: Minimum Schoolhouse Construction Standards.
ERIC Educational Resources Information Center
Arkansas State Dept. of Education, Little Rock.
Regulatory guidelines governing the minimum schoolhouse construction standards as well as rules for new construction applications, school site selection, and approval procedures are presented. Appendices (comprising 95 percent of the publication) document the following: educational space guidelines; planning for modern education; school…
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or whether charges are being lost. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and to information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
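As a concrete illustration of the volume variance the authors argue should appear on internal reports, the sketch below applies the textbook split of fixed overhead into spending and volume variances; the department figures are hypothetical, not drawn from the article.

```python
# Hypothetical fixed-overhead figures for one department in one month.
budgeted_fixed_overhead = 120_000.0   # dollars budgeted for the period
actual_fixed_overhead   = 126_500.0   # dollars actually incurred
budgeted_volume         = 4_000       # budgeted patient days (denominator volume)
actual_volume           = 3_600       # patient days actually delivered

# Standard (predetermined) overhead rate per unit of volume.
standard_rate = budgeted_fixed_overhead / budgeted_volume

# Spending variance: actual spending vs. the static budget.
spending_variance = actual_fixed_overhead - budgeted_fixed_overhead

# Volume variance: static budget vs. overhead absorbed by actual volume.
# A positive value means overhead was under-recovered because volume fell
# short of budget -- the "loss from unrecovered overhead" in the abstract.
volume_variance = budgeted_fixed_overhead - standard_rate * actual_volume

print(f"Standard overhead rate: ${standard_rate:.2f} per patient day")
print(f"Spending variance:      ${spending_variance:,.0f} (unfavorable if > 0)")
print(f"Volume variance:        ${volume_variance:,.0f} (unfavorable if > 0)")
```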
2012-09-01
by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and Quasar DSI helmet-based... Systems; ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012.
ERIC Educational Resources Information Center
Johnson, Jim
2017-01-01
A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer based on the MVB is proposed with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging; therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
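The abstract does not give the update equations, so the sketch below is only one plausible reading of the idea: replace the matrix inversion with an iterative (projected-gradient) solution of the constrained minimum variance problem, warm-started from the weights of a neighboring imaging point. Array sizes, the step size, and the use of real-valued data are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def mv_weights_iterative(R, a, w0=None, mu=None, n_iter=50):
    """Approximate the minimum variance weights  w* = R^{-1} a / (a^T R^{-1} a)
    by projected gradient descent on  w^T R w  subject to  a^T w = 1,
    avoiding an explicit matrix inversion.  Warm-starting from the weights
    of a neighboring imaging point (w0) speeds up convergence."""
    L = len(a)
    w = np.full(L, 1.0 / a.sum()) if w0 is None else w0.copy()
    if mu is None:
        # Safe step size: below 2 / lambda_max(R); the trace is a cheap upper
        # bound on lambda_max for a positive definite covariance matrix.
        mu = 1.0 / np.trace(R)
    for _ in range(n_iter):
        w = w - mu * (R @ w)                      # gradient step, cost O(L^2)
        w = w + a * (1.0 - a @ w) / (a @ a)       # project back onto a^T w = 1
    return w

# Illustrative real-valued example (ultrasound data would normally be complex).
rng = np.random.default_rng(0)
L = 16
X = rng.standard_normal((L, 200))
R = X @ X.T / 200 + 0.01 * np.eye(L)              # diagonally loaded covariance
a = np.ones(L)                                    # steering vector after delays

w_iter = mv_weights_iterative(R, a, n_iter=500)
w_exact = np.linalg.solve(R, a) / (a @ np.linalg.solve(R, a))
print("max deviation from closed-form MV weights:", np.abs(w_iter - w_exact).max())
```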
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of a Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere; artifacts and artificial narrowing of turbulent layers are among such impacts.
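For readers comparing the two estimators, the sketch below computes a minimum variance (Capon) spectrum alongside an FFT periodogram for a two-tone test signal; the filter order and signal parameters are arbitrary illustrative choices.

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate of a real signal x.
    'order' is the filter length (degrees of freedom); as the abstract notes,
    it must exceed the number of distinct spectral lines to resolve them."""
    N = len(x)
    # Estimate the order-by-order autocorrelation matrix from data snapshots.
    snaps = np.array([x[i:i + order] for i in range(N - order + 1)])
    R = snaps.T @ snaps / snaps.shape[0]
    R += 1e-6 * np.trace(R) / order * np.eye(order)   # small diagonal load
    Rinv = np.linalg.inv(R)
    spec = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        e = np.exp(2j * np.pi * f * np.arange(order))  # steering vector
        spec[k] = 1.0 / np.real(np.conj(e) @ Rinv @ e)
    return spec

# Two closely spaced tones plus a little noise (frequencies in cycles/sample).
n = np.arange(512)
x = np.sin(2 * np.pi * 0.10 * n) + 0.5 * np.sin(2 * np.pi * 0.13 * n)
x += 0.1 * np.random.default_rng(1).standard_normal(n.size)

freqs = np.linspace(0, 0.5, 256)
mvm = capon_spectrum(x, order=32, freqs=freqs)
periodogram = np.abs(np.fft.rfft(x, 512)) ** 2 / 512

print("MVM peak near f =", freqs[np.argmax(mvm)])
print("FFT peak near f =", np.fft.rfftfreq(512)[np.argmax(periodogram)])
```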
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, progressing from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, even though the MVPPC control law already provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
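The paper's control laws are not reproduced in the abstract, so as background the sketch below shows a generic adaptive minimum variance controller for a first-order ARX model with recursive least-squares parameter estimation; the plant coefficients and reference are invented and the EDM-specific gap-state dynamics are not modeled.

```python
import numpy as np

rng = np.random.default_rng(42)
a_true, b_true, sigma_e = 0.8, 0.5, 0.05   # hypothetical plant, not an EDM model
r = 1.0                                    # desired output (reference)

theta = np.zeros(2)                        # RLS estimates of [a, b]
P = np.eye(2) * 100.0                      # RLS covariance
lam = 0.99                                 # forgetting factor

y, u = 0.0, 0.0
outputs = []
for t in range(500):
    # --- plant: y(t+1) = a*y(t) + b*u(t) + e(t+1) ---
    phi = np.array([y, u])
    y_next = a_true * y + b_true * u + sigma_e * rng.standard_normal()

    # --- recursive least squares update of the model parameters ---
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y_next - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam

    # --- minimum variance control law: choose u so that E[y(t+1)] = r ---
    # (crude start: before the parameters are learned the controller overshoots)
    a_hat, b_hat = theta
    b_safe = b_hat if abs(b_hat) > 1e-3 else 1e-3   # avoid division blow-up
    y = y_next
    u = (r - a_hat * y) / b_safe

    outputs.append(y)

tail = np.array(outputs[200:])
print(f"steady-state mean = {tail.mean():.3f}, std = {tail.std():.3f} "
      f"(theoretical minimum std = {sigma_e})")
```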
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Do Formal Inspections Ensure that British Zoos Meet and Improve on Minimum Animal Welfare Standards?
Draper, Chris; Browne, William; Harris, Stephen
2013-01-01
Simple Summary Key aims of the formal inspections of British zoos are to assess compliance with minimum standards of animal welfare and promote improvements in animal care and husbandry. We compared reports from two consecutive inspections of 136 British zoos to see whether these goals were being achieved. Most zoos did not meet all the minimum animal welfare standards and there was no clear evidence of improving levels of compliance with standards associated with the Zoo Licensing Act 1981. The current system of licensing and inspection does not ensure that British zoos meet and maintain, let alone exceed, the minimum animal welfare standards. Abstract We analysed two consecutive inspection reports for each of 136 British zoos made by government-appointed inspectors between 2005 and 2011 to assess how well British zoos were complying with minimum animal welfare standards; median interval between inspections was 1,107 days. There was no conclusive evidence for overall improvements in the levels of compliance by British zoos. Having the same zoo inspector at both inspections affected the outcome of an inspection; animal welfare criteria were more likely to be assessed as unchanged if the same inspector was present on both inspections. This, and erratic decisions as to whether a criterion applied to a particular zoo, suggest inconsistency in assessments between inspectors. Zoos that were members of a professional association (BIAZA) did not differ significantly from non-members in the overall number of criteria assessed as substandard at the second inspection but were more likely to meet the standards on both inspections and less likely to have criteria remaining substandard. Lack of consistency between inspectors, and the high proportion of zoos failing to meet minimum animal welfare standards nearly thirty years after the Zoo Licensing Act came into force, suggest that the current system of licensing and inspection is not meeting key objectives and requires revision. PMID:26479752
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
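A minimal sketch of the data-generating model described above, assuming a sire model in which macro-environmental sensitivity is a sire-specific reaction-norm slope and micro-environmental sensitivity is a sire-specific log residual variance; the variance values are arbitrary and the double hierarchical GLM estimation itself (as done in ASReml) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

n_sires, n_offspring = 100, 100          # the design size the study recommends
var_int, var_slope, var_logvar = 0.20, 0.05, 0.10   # illustrative sire variances
mu, beta, base_logvar = 10.0, 1.0, 0.0   # mean, average slope, log residual variance

# Sire effects: intercept, macro-environmental sensitivity (reaction-norm slope)
# and micro-environmental sensitivity (log residual variance).
s_int   = rng.normal(0, np.sqrt(var_int),    n_sires)
s_slope = rng.normal(0, np.sqrt(var_slope),  n_sires)
s_logv  = rng.normal(0, np.sqrt(var_logvar), n_sires)

records = []
for i in range(n_sires):
    x = rng.normal(0, 1, n_offspring)                    # macro-environmental covariate
    resid_sd = np.sqrt(np.exp(base_logvar + s_logv[i]))  # sire-specific residual SD
    y = mu + s_int[i] + (beta + s_slope[i]) * x + rng.normal(0, resid_sd, n_offspring)
    records.append((i, x, y))

# Crude check of the micro-environmental signal: within-sire residual variances
# should vary more across sires than sampling error alone would predict.
within_var = np.array([np.var(y - np.polyval(np.polyfit(x, y, 1), x), ddof=2)
                       for _, x, y in records])
print("spread of within-sire residual variances:", within_var.std().round(3))
```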
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT...
Code of Federal Regulations, 2010 CFR
2010-04-01
... WITH THE PLAY OF CLASS II GAMES § 547.15 What are the minimum technical standards for electronic data...) Player tracking information; (8) Download Packages; and (9) Any information that affects game outcome. (b...
Code of Federal Regulations, 2012 CFR
2012-04-01
... WITH THE PLAY OF CLASS II GAMES § 547.15 What are the minimum technical standards for electronic data...) Player tracking information; (8) Download Packages; and (9) Any information that affects game outcome. (b...
Code of Federal Regulations, 2011 CFR
2011-04-01
... WITH THE PLAY OF CLASS II GAMES § 547.15 What are the minimum technical standards for electronic data...) Player tracking information; (8) Download Packages; and (9) Any information that affects game outcome. (b...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... Minimums & Obstacle DP, Orig-A Red Oak, IA, Red Oak Muni, Takeoff Minimums & Obstacle DP, Amdt 3 Storm Lake, IA, Storm Lake Muni, Takeoff Minimums & Obstacle DP, Orig Dwight, IL, Dwight, Takeoff Minimums..., Florence Rgnl, RNAV (GPS) RWY 9, Orig-A Spearfish, SD, Black Hills-Clydes Ice Field, RNAV (GPS) RWY 13...
12 CFR 3.10 - Minimum capital requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Minimum capital requirements. 3.10 Section 3.10 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL ADEQUACY STANDARDS Capital Ratio Requirements and Buffers § 3.10 Minimum capital requirements. (a) Minimum capital...
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi
2002-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
NASA Technical Reports Server (NTRS)
Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
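Minimum variance analysis, as applied in the Ulysses analyses above, is an eigen-decomposition of the magnetic field covariance matrix; the sketch below runs it on synthetic field vectors (invented values, not Ulysses data). A near-zero field component along the minimum variance direction is the usual signature of a tangential discontinuity.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Minimum variance analysis of an (N, 3) array of magnetic field vectors.
    Returns the eigenvalues (ascending) and eigenvectors of the magnetic
    covariance matrix; the eigenvector of the smallest eigenvalue estimates
    the discontinuity normal."""
    M = np.cov(B, rowvar=False)           # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigh returns ascending eigenvalues
    return eigvals, eigvecs

# Synthetic field rotation across a discontinuity (illustrative values only).
rng = np.random.default_rng(3)
t = np.linspace(-1, 1, 200)
B = np.column_stack([
    5.0 * np.tanh(t / 0.2),        # component that reverses across the layer
    3.0 + 1.5 * t,                 # slowly varying intermediate component
    0.2 * np.ones_like(t),         # small, nearly constant normal component
]) + 0.1 * rng.standard_normal((t.size, 3))

eigvals, eigvecs = minimum_variance_analysis(B)
n_hat = eigvecs[:, 0]                         # minimum variance direction
Bn = B @ n_hat                                # field along the estimated normal
print("eigenvalues (min, mid, max):", np.round(eigvals, 3))
print("mean |B_n| / |B| =", round(abs(Bn.mean()) / np.linalg.norm(B, axis=1).mean(), 3))
# A small normal component (B_n ~ 0) is the signature of a tangential discontinuity.
```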
NASA Astrophysics Data System (ADS)
Goeritno, Arief; Rasiman, Syofyan
2017-06-01
A performance examination of the bulk oil circuit breaker (BOCB), as influenced by its parameters, at the Bogor Baru substation (the State Electricity Company, PLN) has been carried out. It is found that (1) the dielectric strength of the oil still qualifies it as an insulating and cooling medium, because the average measured value is still above the minimum allowed limit of 80 kV/2.5 cm (32 kV/cm); (2) the simultaneity of the CB's contacts is still acceptable, so that the BOCB can still be operated, because the difference in time between the highest and lowest values when the BOCB's contacts are opened/closed is less than 10 milliseconds (Δt < 10 ms), meeting the PLN standards as recommended by Alsthom; and (3) the resistance parameters are according to the standards, where (i) the insulation resistance has a value far above the allowed threshold, the minimum standard being above 2,000 MΩ (if meeting the ANSI standards) or at the value of 2,000 MΩ (if meeting PLN standards), and (ii) the contact resistance has a value far above the allowed threshold, the standard being below 350 µΩ (if meeting ANSI standards) or at the value of 200 µΩ (if meeting PLN standards). The grounding resistance is equal to the maximum limit specified, the maximum standard being 0.5 Ω (if meeting the PLN standard).
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Variance computations for functional of absolute risk estimates
Pfeiffer, R.M.; Petracci, E.
2011-01-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476
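As a toy contrast between the influence-function and bootstrap variance estimates discussed above, the sketch below treats a much simpler functional (the mean of a set of risk estimates); the breast cancer absolute risk model itself is not reproduced, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
risk = rng.beta(2, 30, size=500)           # hypothetical absolute risk estimates

# Influence-function variance for the mean functional T(F) = E[X]:
# the empirical influence values are x_i - mean, and Var_hat(T) = sum(IF_i^2) / n^2.
n = risk.size
infl = risk - risk.mean()
var_if = np.sum(infl ** 2) / n ** 2

# Bootstrap variance of the same functional, for comparison.
boot = np.array([rng.choice(risk, size=n, replace=True).mean() for _ in range(2000)])
var_boot = boot.var(ddof=1)

print(f"influence-function SE: {np.sqrt(var_if):.5f}")
print(f"bootstrap SE:          {np.sqrt(var_boot):.5f}")
```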
25 CFR 547.11 - What are the minimum technical standards for money and credit handling?
Code of Federal Regulations, 2010 CFR
2010-04-01
... GAMES § 547.11 What are the minimum technical standards for money and credit handling? This section... interface is: (i) Involved in the play of a game; (ii) In audit mode, recall mode or any test mode; (iii...
25 CFR 547.11 - What are the minimum technical standards for money and credit handling?
Code of Federal Regulations, 2012 CFR
2012-04-01
... GAMES § 547.11 What are the minimum technical standards for money and credit handling? This section... interface is: (i) Involved in the play of a game; (ii) In audit mode, recall mode or any test mode; (iii...
25 CFR 547.11 - What are the minimum technical standards for money and credit handling?
Code of Federal Regulations, 2011 CFR
2011-04-01
... GAMES § 547.11 What are the minimum technical standards for money and credit handling? This section... interface is: (i) Involved in the play of a game; (ii) In audit mode, recall mode or any test mode; (iii...
Proposed English Standards Promote Aviation Safety.
ERIC Educational Resources Information Center
Chatham, Robert L.; Thomas, Shelley
2000-01-01
Discusses the International Civil Aviation Organization's (ICAO) Air Navigation's Commission approval of a task to develop minimum skill level requirements in English for air traffic control. The ICAO collaborated with the Defense Language Institute English Language Center to propose a minimum standard for English proficiency for international…
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2013 CFR
2013-04-01
... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault, sexual molestation, sexual exploitation, sexual contact or prostitution, or crimes against persons. ... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63...
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2014 CFR
2014-04-01
... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault, sexual molestation, sexual exploitation, sexual contact or prostitution, or crimes against persons. ... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63...
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2011 CFR
2011-04-01
... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault, sexual molestation, sexual exploitation, sexual contact or prostitution, or crimes against persons. ... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63...
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2012 CFR
2012-04-01
... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault, sexual molestation, sexual exploitation, sexual contact or prostitution, or crimes against persons. ... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63...
Current Status of Multidisciplinary Care in Psoriatic Arthritis in Spain: NEXUS 2.0 Project.
Queiro, Rubén; Coto, Pablo; Joven, Beatriz; Rivera, Raquel; Navío Marco, Teresa; de la Cueva, Pablo; Alvarez Vega, Jose Luis; Narváez Moreno, Basilio; Rodriguez Martínez, Fernando José; Pardo Sánchez, José; Feced Olmos, Carlos; Pujol, Conrad; Rodríguez, Jesús; Notario, Jaume; Pujol Busquets, Manel; García Font, Mercè; Galindez, Eva; Pérez Barrio, Silvia; Urruticoechea-Arana, Ana; Hergueta, Merce; López Montilla, M Dolores; Vélez García-Nieto, Antonio; Maceiras, Francisco; Rodríguez Pazos, Laura; Rubio Romero, Esteban; Rodríguez Fernandez Freire, Lourdes; Luelmo, Jesús; Gratacós, Jordi
2018-02-26
The objectives were: 1) to analyze the implementation of multidisciplinary care models in psoriatic arthritis (PsA) patients, and 2) to define minimum and excellent standards of care. A survey was sent to clinicians who already practiced multidisciplinary care or were in the process of undertaking it, asking about: 1) the type of multidisciplinary care model implemented; and 2) the degree, priority and feasibility of the implementation of quality standards for the structure, process and results of care. In 6 regional meetings the results of the survey were presented and discussed, and the final priority of the quality standards for care was defined. At a nominal group meeting, 11 experts (rheumatologists and dermatologists) analyzed the results of the survey and the regional meetings. With this information, they defined which standards of care are currently considered minimum and which are excellent. The simultaneous and parallel models of multidisciplinary care are the most widely implemented, but the implementation of quality standards is highly variable: for structure standards it ranges from 22% to 74%, for process standards from 17% to 54%, and for results standards from 2% to 28%. Of the 25 original quality standards for care, 9 were considered minimum only, 4 excellent, and 12 defined criteria both for the minimum level and for excellence. The definition of minimum and excellent quality standards for care will help achieve the goal of multidisciplinary care for patients with PsA, which is the best healthcare possible. Copyright © 2018 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-09
..., channels, or shore- line or river-bank protection systems such as revetments, sand dunes, and barrier...) toe (subject to preexisting right-of-way). f. The vegetation variance process is not a mechanism to...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time-consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p = 0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). 
The difference between the SVM-Margin and Exploitation method was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67). Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods, (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces Inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group’s individual labelers. Finally, using the AL methods when provided by the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2017-09-01
Labeling instances by domain experts for classification is often time-consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p=0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation method was insignificant (p=0.29), as was the difference between the Combination_XA and Exploitation methods (p=0.67). 
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods, (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces Inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided by the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data, yet in numerous basins the rain gauge stations are located without a clear scientific rationale. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November to February) from 1975 until 2008. This study used a combination of a geostatistical method (variance reduction) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance, indicating that the combination of the geostatistical variance-reduction method and simulated annealing is successful in developing the new optimum rain gauge network.
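A compact sketch of the variance-reduction idea: simulated annealing searches over candidate gauge locations for the subset that minimizes the average simple-kriging variance over a prediction grid. The exponential covariance model, its parameters, and the study region are assumptions for illustration and do not reproduce the Johor network design.

```python
import numpy as np

rng = np.random.default_rng(0)

def cov(d, sill=1.0, corr_range=30.0):
    """Exponential covariance model (illustrative parameters)."""
    return sill * np.exp(-d / corr_range)

def mean_kriging_variance(stations, grid):
    """Average simple-kriging variance over the grid for a station layout."""
    d_ss = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    C = cov(d_ss) + 1e-8 * np.eye(len(stations))
    d_sg = np.linalg.norm(stations[:, None, :] - grid[None, :, :], axis=-1)
    c0 = cov(d_sg)                               # station-to-grid covariances
    w = np.linalg.solve(C, c0)                   # simple-kriging weights
    var = cov(0.0) - np.sum(w * c0, axis=0)      # kriging variance per grid node
    return var.mean()

# Candidate locations and a prediction grid inside a 100 x 100 km square.
candidates = rng.uniform(0, 100, size=(200, 2))
gx, gy = np.meshgrid(np.linspace(0, 100, 15), np.linspace(0, 100, 15))
grid = np.column_stack([gx.ravel(), gy.ravel()])

n_gauges = 20
current = rng.choice(len(candidates), n_gauges, replace=False)
obj = mean_kriging_variance(candidates[current], grid)
best, best_obj, init_obj = current.copy(), obj, obj
T = 0.1

for it in range(2000):                           # simulated annealing loop
    prop = current.copy()
    out = rng.integers(n_gauges)                 # swap one selected gauge
    prop[out] = rng.choice(np.setdiff1d(np.arange(len(candidates)), prop))
    prop_obj = mean_kriging_variance(candidates[prop], grid)
    if prop_obj < obj or rng.random() < np.exp(-(prop_obj - obj) / T):
        current, obj = prop, prop_obj
        if obj < best_obj:
            best, best_obj = current.copy(), obj
    T *= 0.998                                   # geometric cooling schedule

print(f"mean kriging variance: initial {init_obj:.3f} -> optimized {best_obj:.3f}")
```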
Young, Helen; Taylor, Anna; Way, Sally-Anne; Leaning, Jennifer
2004-06-01
This article examines the recent revision of the Sphere Minimum Standards in disaster response relating to food security, nutrition and food aid. It describes how the revision attempted to incorporate the principles of the Humanitarian Charter, as well as relevant human rights principles and values into the Sphere Minimum Standards. The initial aim of the revision was to ensure that the Sphere Minimum Standards better reflected the principles embodied in the Humanitarian Charter. This was later broadened to ensure that key legal standards and principles from human rights and humanitarian law were considered and also incorporated, in part to fill the "protection gap" within the existing standards. In relation to the food security, nutrition and food aid standards, it was agreed by participants in the process that the human right to adequate food and freedom from hunger should be incorporated. In relation to more general principles underlying the Humanitarian Charter, itself drawn largely from human rights and humanitarian law, it was agreed that there was a need to strengthen "protection" elements within the standards and a need to incorporate the basic principles of the right to life with dignity, non-discrimination, impartiality and participation, as well as to explore the relevance of the concept of the progressive realisation of the right to food. The questions raised in linking rights to operational standards required thought, on the one hand, about whether the technical standards reflected a deep understanding of the values expressed within the legal instruments, and whether the existing standards were adequate in relation to those legal rights. On the other hand, it also required reflection on how operational standards like Sphere could give concrete content to human rights, such as the right to food and the right to be free from hunger. However, there remain challenges in examining what a rights-based approach will mean in terms of the role of humanitarian agencies as duty-bearers of rights, given that the primary responsibility rests with state governments. It will also require reflection on the modes and mechanisms of accountability that are brought to bear in ensuring the implementation of the Minimum Standards.
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams", tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
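To make the quantities concrete, the sketch below computes the generalized variance (the determinant of the covariance matrix) and the Wilks standard deviation for a simulated random vector, together with a simple ratio to the product of componentwise standard deviations as an uncorrelation-style index; that normalization is an assumption for illustration and may differ from the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(5)

# A 3-dimensional random vector with correlated components (illustrative).
cov_true = np.array([[1.0, 0.6, 0.2],
                     [0.6, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), cov_true, size=5000)

Sigma = np.cov(X, rowvar=False)
generalized_variance = np.linalg.det(Sigma)       # Wilks' generalized variance
wilks_sd = np.sqrt(generalized_variance)          # "Wilks standard deviation"

# A simple uncorrelation-style index: the Wilks SD relative to the product of
# the componentwise standard deviations (1 when components are uncorrelated,
# smaller as the overall correlation grows).  This normalization is an
# assumption for illustration, not necessarily the paper's exact definition.
componentwise_sd_product = np.prod(np.sqrt(np.diag(Sigma)))
uncorrelation_index = wilks_sd / componentwise_sd_product

print(f"generalized variance: {generalized_variance:.3f}")
print(f"Wilks standard deviation: {wilks_sd:.3f}")
print(f"uncorrelation-style index: {uncorrelation_index:.3f}")
```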
Experimental demonstration of quantum teleportation of a squeezed state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro
2005-10-15
Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85±0.05, which is higher than the classical case of 0.73±0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is indeed transferred through the process: the variance of the teleported squeezed state is observed to be smaller than that for the vacuum state input.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
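The uniform white-sequence model treats the quantizer error as uniformly distributed over plus or minus half a step, giving variance q²/12; the sketch below checks that model numerically and converts it to an effective SNR. The bit depth and test signal are illustrative choices, not the parameters of the report.

```python
import numpy as np

rng = np.random.default_rng(2)

bits = 8
full_scale = 1.0
q = 2 * full_scale / 2 ** bits            # quantizer step size

# A phase-detector-like input: sinusoid plus a little noise (illustrative).
t = np.arange(100_000)
x = 0.7 * np.sin(2 * np.pi * 0.01 * t) + 0.01 * rng.standard_normal(t.size)

xq = np.round(x / q) * q                  # uniform mid-tread quantizer
err = xq - x

var_model = q ** 2 / 12                   # uniform white-sequence model
var_measured = err.var()

signal_var = x.var()
effective_snr_db = 10 * np.log10(signal_var / var_measured)

print(f"quantizer error variance: model {var_model:.2e}, measured {var_measured:.2e}")
print(f"effective SNR: {effective_snr_db:.1f} dB")
```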
Bassani, Diego G; Corsi, Daniel J; Gaffey, Michelle F; Barros, Aluisio J D
2014-01-01
Worse health outcomes including higher morbidity and mortality are most often observed among the poorest fractions of a population. In this paper we present and validate national, regional and state-level distributions of national wealth index scores, for urban and rural populations, derived from household asset data collected in six survey rounds in India between 1992-3 and 2007-8. These new indices and their sub-national distributions allow for comparative analyses of a standardized measure of wealth across time and at various levels of population aggregation in India. Indices were derived through principal components analysis (PCA) performed using standardized variables from a correlation matrix to minimize differences in variance. Valid and simple indices were constructed with the minimum number of assets needed to produce scores with enough variability to allow definition of unique decile cut-off points in each urban and rural area of all states. For all indices, the first PCA components explained between 36% and 43% of the variance in household assets. Using sub-national distributions of national wealth index scores, mean height-for-age z-scores increased from the poorest to the richest wealth quintiles for all surveys, and stunting prevalence was higher among the poorest and lower among the wealthiest. Urban and rural decile cut-off values for India, for the six regions and for the 24 major states revealed large variability in wealth by geographical area and level, and rural wealth score gaps exceeded those observed in urban areas. The large variability in sub-national distributions of national wealth index scores indicates the importance of accounting for such variation when constructing wealth indices and deriving score distribution cut-off points. Such an approach allows for proper within-sample economic classification, resulting in scores that are valid indicators of wealth and correlate well with health outcomes, and enables wealth-related analyses at whichever geographical area and level may be most informative for policy-making processes.
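A minimal sketch of the index construction described above: standardize the asset indicators, take the first principal component from the correlation matrix as the wealth score, and cut it into deciles; the asset list and prevalences are invented, and the urban/rural and sub-national standardization in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical household asset indicators (1 = owns the asset).
n_households = 10_000
prevalence = {"electricity": 0.7, "tv": 0.5, "fridge": 0.3,
              "motorcycle": 0.25, "car": 0.08, "phone": 0.6}
assets = np.column_stack([rng.random(n_households) < p
                          for p in prevalence.values()]).astype(float)

# Standardize each indicator, then PCA via the correlation matrix.
Z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

first_pc = eigvecs[:, -1]                          # component with largest eigenvalue
wealth_score = Z @ first_pc                        # household wealth index score
explained = eigvals[-1] / eigvals.sum()            # share of variance explained

# Decile cut-offs of the score, analogous to the paper's distribution tables.
deciles = np.percentile(wealth_score, np.arange(10, 100, 10))
quintile = np.digitize(wealth_score, np.percentile(wealth_score, [20, 40, 60, 80]))

print(f"first component explains {explained:.0%} of asset variance")
print("decile cut-offs:", np.round(deciles, 2))
print("households per quintile:", np.bincount(quintile))
```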
Precision gravimetric survey at the conditions of urban agglomerations
NASA Astrophysics Data System (ADS)
Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander
2014-05-01
The growth and aging of large cities lead to irreversible negative changes in the underground environment. The study of these changes in urban areas is based mainly on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main targets of an urban gravimetric survey are soil decompaction, which leads to loss of rock strength, and karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement methods. The Gravimetry division of Lomonosov Moscow State University has been examining modern precision Scintrex CG-5 Autograv gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. The records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of the measurements, characterizes the sensitivity of the system to external noise and varies for different devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the residual nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important for gravimetric monitoring or repeated observations, when mid-range anomalies are the targets; for the examined gravimeters, the amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared using the variance of the measurement results and the internal convergence of the series. The variance of the measurement results (from ±2 to ±4 μGal) and their internal convergence are independent of the transportation mode; measurements differ only in processing time and the corresponding number of readings. Importantly, the internal convergence is an individual attribute of a particular device; for the investigated gravimeters it varies from ±3 to ±8 μGal. Various degrees of stability of the gravimeter base were also tested. The most stable base (minimum microseisms) in this experiment was a concrete pedestal, the least stable a point on the 28th floor. There is no direct dependence of the variance of the measurement results on the external noise level; moreover, the dispersion between different gravimeters is minimal at the point with the highest microseisms. Conclusions: the measurement quality of the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation and the degree of nonlinearity of its drift. Although these parameters of the tested gravimeters generally corresponded to the factory specifications, for surveys with a required accuracy of ±2-5 μGal the best gravimeters should be selected. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in the urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 meters. The parameters of these underground voids, obtained as a result of the interpretation, agree well with a priori data.
76 FR 38431 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... Commission's minimum performance standards regarding registered transfer agents, and (2) to assure that issuers are aware of certain problems and poor performances with respect to the transfer agents that are... failure to comply with the Commission's minimum performance standards then the issuer will be unable to...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Mathematics. (3) Social studies. (4) Sciences. (5) Fine arts. (6) Physical education. (b) Each school shall... Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., physical education, music, etc.) which are directly related to or affect student instruction shall provide....20 Section 36.20 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY...
Setting Standards for Minimum Competency Tests.
ERIC Educational Resources Information Center
Mehrens, William A.
Some general questions about minimum competency tests are discussed, and various methods of setting standards are reviewed with major attention devoted to those methods used for dichotomizing a continuum. Methods reviewed under the heading of Absolute Judgments of Test Content include Nedelsky's, Angoff's, Ebel's, and Jaeger's. These methods are…
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and creative tendencies. (5) Health education inclusive of the requirements contained in the Act of... Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2011 CFR
2011-04-01
... and creative tendencies. (5) Health education inclusive of the requirements contained in the Act of... Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Minimum standards for health benefits plans. 890.201 Section 890.201 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201...
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Minimum standards for health benefits carriers. 890.202 Section 890.202 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202...
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Minimum standards for health benefits plans. 890.201 Section 890.201 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201...
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Minimum standards for health benefits carriers. 890.202 Section 890.202 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202...
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Minimum standards for health benefits carriers. 890.202 Section 890.202 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) Mathematics. (3) Social studies. (4) Sciences. (5) Fine arts. (6) Physical education. (b) Each school shall... Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
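A simplified sketch of an LCMRL-style calculation, not the EPA implementation: it assumes constant variance (ordinary least squares), symmetric 99% prediction intervals, and a coarse grid search for the lowest true concentration at which the whole prediction interval lies inside the 50-150% recovery band. Function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def lcmrl_sketch(true_conc, measured_conc, lo=0.5, hi=1.5, conf=0.99):
    x, y = np.asarray(true_conc, float), np.asarray(measured_conc, float)
    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s = np.sqrt(resid @ resid / (n - 2))                 # residual standard error
    t_crit = stats.t.ppf(0.5 + conf / 2, n - 2)
    xbar, sxx = x.mean(), ((x - x.mean()) ** 2).sum()

    grid = np.linspace(x.min(), x.max(), 2000)[1:]       # avoid true concentration of zero
    pred = slope * grid + intercept
    half = t_crit * s * np.sqrt(1 + 1 / n + (grid - xbar) ** 2 / sxx)
    ok = (pred - half >= lo * grid) & (pred + half <= hi * grid)
    return grid[ok][0] if ok.any() else None             # lowest qualifying concentration

# Example with replicate spikes at several true concentrations (made-up numbers).
true = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 4)
meas = true * 0.95 + np.random.default_rng(1).normal(0, 0.3, true.size)
print(lcmrl_sketch(true, meas))
```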
Lehrer, Paul; Karavidas, Maria; Lu, Shou-En; Vaschillo, Evgeny; Vaschillo, Bronya; Cheng, Andrew
2010-05-01
Seven professional airplane pilots participated in a one-session test in a Boeing 737-800 simulator. Mental workload for 18 flight tasks was rated by experienced test pilots (hereinafter called "expert ratings") and by study participants' self-report on NASA's Task Load Index (TLX) scale. Pilot performance was rated by a check pilot. The standard deviation of R-R intervals (SDNN) significantly added 3.7% improvement over the TLX in distinguishing high from moderate-load tasks and 2.3% improvement in distinguishing high from combined moderate and low-load tasks. Minimum RRI in the task significantly discriminated high- from medium- and low-load tasks, but did not add significant predictive variance to the TLX. The low-frequency/high-frequency (LF:HF) RRI ratio based on spectral analysis of R-R intervals, and ventricular relaxation time were each negatively related to pilot performance ratings independently of TLX values, while minimum and average RRI were positively related, showing added contribution of these cardiac measures for predicting performance. Cardiac results were not affected by controlling either for respiration rate or motor activity assessed by accelerometry. The results suggest that cardiac assessment can be a useful addition to self-report measures for determining flight task mental workload and risk for performance decrements. Replication on a larger sample is needed to confirm and extend the results. Copyright 2010 Elsevier B.V. All rights reserved.
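A small sketch of the simplest cardiac features named above, computed from a list of R-R intervals for one task segment; the input values and function name are illustrative, and the spectral LF:HF measure is not reproduced here.

```python
import numpy as np

def task_cardiac_features(rr_ms):
    """SDNN, minimum and mean R-R interval for one task segment (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    return {
        "SDNN_ms": rr.std(ddof=1),   # standard deviation of R-R intervals
        "min_RRI_ms": rr.min(),      # shortest interval (highest instantaneous heart rate)
        "mean_RRI_ms": rr.mean(),
    }

print(task_cardiac_features([812, 798, 775, 760, 790, 805, 820]))
```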
Zharkova, V. V.; Shepherd, S. J.; Popova, E.; Zharkov, S. I.
2015-01-01
We derive two principal components (PCs) of temporal magnetic field variations over the solar cycles 21–24 from full disk magnetograms covering about 39% of data variance, with σ = 0.67. These PCs are attributed to two main magnetic waves travelling from the opposite hemispheres with close frequencies and increasing phase shift. Using symbolic regeression analysis we also derive mathematical formulae for these waves and calculate their summary curve which we show is linked to solar activity index. Extrapolation of the PCs backward for 800 years reveals the two 350-year grand cycles superimposed on 22 year-cycles with the features showing a remarkable resemblance to sunspot activity reported in the past including the Maunder and Dalton minimum. The summary curve calculated for the next millennium predicts further three grand cycles with the closest grand minimum occurring in the forthcoming cycles 26–27 with the two magnetic field waves separating into the opposite hemispheres leading to strongly reduced solar activity. These grand cycle variations are probed by α − Ω dynamo model with meridional circulation. Dynamo waves are found generated with close frequencies whose interaction leads to beating effects responsible for the grand cycles (350–400 years) superimposed on a standard 22 year cycle. This approach opens a new era in investigation and confident prediction of solar activity on a millenium timescale. PMID:26511513
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify the performance of imaging resolution and the contrast ratio (CR) to quantify the performance of imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformer, while the PSL is 127.2dB, 115dB and 60dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformer respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay and the results show that the new beamformer provides better robustness against the sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
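A sketch of the baseline steps the paper builds on: standard minimum-variance (Capon) weights followed by an eigenspace projection onto the dominant eigenvectors of the sample covariance matrix. The Wiener postfilter of the proposed method is not reproduced; the diagonal loading, array size and toy data are illustrative assumptions.

```python
import numpy as np

def esbmv_weights(R, a, signal_rank=1, loading=1e-3):
    """R: sample covariance (channels x channels), a: steering vector."""
    Rl = R + loading * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    w_mv = Ri_a / (a.conj() @ Ri_a)                    # minimum-variance (Capon) weights
    vals, vecs = np.linalg.eigh(Rl)                    # eigenvalues in ascending order
    Es = vecs[:, -signal_rank:]                        # signal subspace (largest eigenvalues)
    return Es @ (Es.conj().T @ w_mv)                   # eigenspace-projected weights

# Toy example with 8 channels of complex data.
rng = np.random.default_rng(2)
x = rng.normal(size=(8, 64)) + 1j * rng.normal(size=(8, 64))
R = x @ x.conj().T / 64
a = np.ones(8, dtype=complex)
w = esbmv_weights(R, a)
print(abs(w.conj() @ a))                               # array response in the steering direction
```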
Code of Federal Regulations, 2010 CFR
2010-07-01
... barrels per day; and (C) For 1997, 1,445,000 barrels per day. (ii) Maximum extent practicable means that... (thermocouples, pressure transducers, continuous emissions monitors (CEMS), etc.) variances. (i) The plan shall... monitoring equipment functions properly and variances of the control equipment and monitoring equipment are...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.
NASA Technical Reports Server (NTRS)
Fern, Lisa Carolynn
2017-01-01
The primary activity for the UAS-NAS Human Systems Integration (HSI) sub-project in Phase 1 was support of RTCA Special Committee 228 Minimum Operational Performance Standards (MOPS). We provide data on the effect of various Detect and Avoid (DAA) display features with respect to pilot performance of the remain well clear function in order to determine the minimum requirements for DAA displays.
75 FR 55269 - Minimum Internal Control Standards for Class II Gaming
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-10
... DEPARTMENT OF THE INTERIOR National Indian Gaming Commission 25 CFR Parts 542 and 543 RIN 3141-AA-37 Minimum Internal Control Standards for Class II Gaming AGENCY: National Indian Gaming Commission. ACTION: Delay of effective date of final rule; request for comments. SUMMARY: The National Indian Gaming...
47 CFR 64.706 - Minimum standards for the routing and handling of emergency telephone calls.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of emergency telephone calls. 64.706 Section 64.706 Telecommunication FEDERAL COMMUNICATIONS... Operator Services § 64.706 Minimum standards for the routing and handling of emergency telephone calls. Upon receipt of any emergency telephone call, providers of operator services and aggregators shall...
25 CFR 36.1 - Purpose, scope, and information collection requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... 36.1 Section 36.1 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY... of this rule is to establish minimum academic standards for the basic education of Indian children...
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Elementary and Secondary Education.
Minimum standards for elementary and secondary schools require that courses of study provide for the following topics: citizenship, human relations education, and multicultural education. It is educationally profitable to view these as three interdependent perspectives which shape a citizen's participation in democratic life. Cultivating these…
38 CFR 17.155 - Minimum standards of safety and quality for automotive adaptive equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... safety and quality for automotive adaptive equipment. 17.155 Section 17.155 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Automotive Equipment and Driver Training § 17.155 Minimum standards of safety and quality for automotive adaptive equipment. (a) The Under Secretary for...
38 CFR 17.155 - Minimum standards of safety and quality for automotive adaptive equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... safety and quality for automotive adaptive equipment. 17.155 Section 17.155 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Automotive Equipment and Driver Training § 17.155 Minimum standards of safety and quality for automotive adaptive equipment. (a) The Under Secretary for...
38 CFR 17.155 - Minimum standards of safety and quality for automotive adaptive equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... safety and quality for automotive adaptive equipment. 17.155 Section 17.155 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Automotive Equipment and Driver Training § 17.155 Minimum standards of safety and quality for automotive adaptive equipment. (a) The Under Secretary for...
38 CFR 17.155 - Minimum standards of safety and quality for automotive adaptive equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... safety and quality for automotive adaptive equipment. 17.155 Section 17.155 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Automotive Equipment and Driver Training § 17.155 Minimum standards of safety and quality for automotive adaptive equipment. (a) The Under Secretary for...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2013 CFR
2013-04-01
... storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR... the minimum technical standards for program storage media? (a) Removable program storage media. All removable program storage media must maintain an internal checksum or signature of its contents...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2014 CFR
2014-04-01
... storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR... the minimum technical standards for program storage media? (a) Removable program storage media. All removable program storage media must maintain an internal checksum or signature of its contents...
12 CFR 615.5205 - Minimum permanent capital standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Minimum permanent capital standards. 615.5205 Section 615.5205 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL... least 7 percent of its risk-adjusted asset base. [62 FR 4446, Jan. 30, 1997] ...
Establishing Proficiency Standards for High School Graduation.
ERIC Educational Resources Information Center
Herron, Marshall D.
The Oregon State Board of Education has rejected the use of cut-off scores on a proficiency test to establish minimum performance standards for high school graduation. Instead, each school district is required to specify--by local board adoption--minimum competencies in reading, writing, listening, speaking, analyzing, and computing. These…
General Requirements and Minimum Standards.
ERIC Educational Resources Information Center
2003
This publication provides the General Requirements and Minimum Standards developed by the National Court Reporters Association's Council on Approved Student Education (CASE). They are the same for all court reporter education programs, whether an institution is applying for approval for the first time or for a new grant of approval. The first…
Code of Federal Regulations, 2013 CFR
2013-04-01
... data communications between system components? 547.15 Section 547.15 Indians NATIONAL INDIAN GAMING... AND EQUIPMENT § 547.15 What are the minimum technical standards for electronic data communications between system components? (a) Sensitive data. Communication of sensitive data must be secure from...
Code of Federal Regulations, 2014 CFR
2014-04-01
... data communications between system components? 547.15 Section 547.15 Indians NATIONAL INDIAN GAMING... AND EQUIPMENT § 547.15 What are the minimum technical standards for electronic data communications between system components? (a) Sensitive data. Communication of sensitive data must be secure from...
25 CFR 36.31 - Standard XI-Student promotion requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard XI-Student promotion requirements. 36.31 Section 36.31 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC... programs, in a minimum of 160 instructional days per academic term or 80 instructional days per semester...
25 CFR 36.31 - Standard XI-Student promotion requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard XI-Student promotion requirements. 36.31 Section 36.31 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC... programs, in a minimum of 160 instructional days per academic term or 80 instructional days per semester...
29 CFR 4.159 - General minimum wage.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations, RSD_R,% = 100·s_R/ȳ, where s_R is the sample reproducibility standard deviation, i.e., the square root of the sum of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), so that s_R = √(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100·σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
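A tiny sketch of only the definitions quoted above (not the paper's upper-limit formula): combining the repeatability and laboratory-to-laboratory variances into a reproducibility standard deviation and expressing it relative to the sample mean. The numerical inputs are made up for illustration.

```python
import math

def rsd_r_percent(s_r, s_L, y_bar):
    s_R = math.sqrt(s_r ** 2 + s_L ** 2)   # reproducibility standard deviation
    return 100.0 * s_R / y_bar

print(round(rsd_r_percent(s_r=0.8, s_L=1.1, y_bar=25.0), 2))
```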
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
To test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the residual variance is constant... Using the Breusch-Pagan test shown in Table 19 below, the prob > chi2 is greater than α = .05; therefore we fail to reject the null hypothesis of constant variance for the overrunpercentfp100 model... (Table 19: Breusch-Pagan test, H0 = constant variance; estimated results report the variance and standard deviation for overrunpercent100.)
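A sketch of running a Breusch-Pagan test for constant residual variance with statsmodels; the simulated data and variable names are placeholders, not the report's cost data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)      # homoskedastic by construction

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
msg = "fail to reject constant variance" if lm_pvalue > 0.05 else "heteroskedasticity indicated"
print(f"Breusch-Pagan p-value = {lm_pvalue:.3f}: {msg} at alpha = 0.05")
```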
ERIC Educational Resources Information Center
Oranje, Andreas
2006-01-01
A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
Yilmaz, Pelin; Kottmann, Renzo; Field, Dawn; Knight, Rob; Cole, James R; Amaral-Zettler, Linda; Gilbert, Jack A; Karsch-Mizrachi, Ilene; Johnston, Anjanette; Cochrane, Guy; Vaughan, Robert; Hunter, Christopher; Park, Joonhong; Morrison, Norman; Rocca-Serra, Philippe; Sterk, Peter; Arumugam, Manimozhiyan; Bailey, Mark; Baumgartner, Laura; Birren, Bruce W; Blaser, Martin J; Bonazzi, Vivien; Booth, Tim; Bork, Peer; Bushman, Frederic D; Buttigieg, Pier Luigi; Chain, Patrick S G; Charlson, Emily; Costello, Elizabeth K; Huot-Creasy, Heather; Dawyndt, Peter; DeSantis, Todd; Fierer, Noah; Fuhrman, Jed A; Gallery, Rachel E; Gevers, Dirk; Gibbs, Richard A; Gil, Inigo San; Gonzalez, Antonio; Gordon, Jeffrey I; Guralnick, Robert; Hankeln, Wolfgang; Highlander, Sarah; Hugenholtz, Philip; Jansson, Janet; Kau, Andrew L; Kelley, Scott T; Kennedy, Jerry; Knights, Dan; Koren, Omry; Kuczynski, Justin; Kyrpides, Nikos; Larsen, Robert; Lauber, Christian L; Legg, Teresa; Ley, Ruth E; Lozupone, Catherine A; Ludwig, Wolfgang; Lyons, Donna; Maguire, Eamonn; Methé, Barbara A; Meyer, Folker; Muegge, Brian; Nakielny, Sara; Nelson, Karen E; Nemergut, Diana; Neufeld, Josh D; Newbold, Lindsay K; Oliver, Anna E; Pace, Norman R; Palanisamy, Giriprakash; Peplies, Jörg; Petrosino, Joseph; Proctor, Lita; Pruesse, Elmar; Quast, Christian; Raes, Jeroen; Ratnasingham, Sujeevan; Ravel, Jacques; Relman, David A; Assunta-Sansone, Susanna; Schloss, Patrick D; Schriml, Lynn; Sinha, Rohini; Smith, Michelle I; Sodergren, Erica; Spor, Aymé; Stombaugh, Jesse; Tiedje, James M; Ward, Doyle V; Weinstock, George M; Wendel, Doug; White, Owen; Whiteley, Andrew; Wilke, Andreas; Wortman, Jennifer R; Yatsunenko, Tanya; Glöckner, Frank Oliver
2012-01-01
Here we present a standard developed by the Genomic Standards Consortium (GSC) for reporting marker gene sequences—the minimum information about a marker gene sequence (MIMARKS). We also introduce a system for describing the environment from which a biological sample originates. The ‘environmental packages’ apply to any genome sequence of known origin and can be used in combination with MIMARKS and other GSC checklists. Finally, to establish a unified standard for describing sequence data and to provide a single point of entry for the scientific community to access and learn about GSC checklists, we present the minimum information about any (x) sequence (MIxS). Adoption of MIxS will enhance our ability to analyze natural genetic diversity documented by massive DNA sequencing efforts from myriad ecosystems in our ever-changing biosphere. PMID:21552244
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance, while including variability among individuals and populations, are explored for the range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Species response to the environment is a Gaussian function with a peak of 1.0 at the species' peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree-ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence, because extreme events that result in extinctions cannot be balanced by positive anomalies; even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two-species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species, so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
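A sketch of the response function described above, used as an establishment probability in a single Monte Carlo step; the parameter values, grid size and noise level are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def fitness(env, optimum, breadth):
    """Gaussian response with a peak of 1.0 at the species optimum on the gradient."""
    return np.exp(-0.5 * ((env - optimum) / breadth) ** 2)

rng = np.random.default_rng(4)
env_gradient = np.linspace(-3, 3, 50)
perturbed_env = env_gradient + rng.normal(0, 0.5, 50)     # climatic anomaly for this year
p_establish = fitness(perturbed_env, optimum=0.0, breadth=1.0)
established = rng.random(50) < p_establish                # Monte Carlo establishment
print(established.sum(), "of 50 cells colonized in this draw")
```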
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation maximization-REML algorithm implemented through double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10(-3) and 4.17×10(-3) for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also contributed substantially to micro-environmental sensitivity. Addition of random regressions to the mean model did not reduce heterogeneity in residual variance, indicating that genetic heterogeneity of residual variance was not simply an effect of an incomplete mean model. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-05
... Linder Rgnl, Takeoff Minimums and Obstacle DP, Amdt 1 Venice, FL, Venice Muni, RNAV (GPS) RWY 5, Orig Venice, FL, Venice Muni, RNAV (GPS) RWY 23, Orig Venice, FL, Venice Muni, Takeoff Minimums and Obstacle...
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy range (bendover scale) and between the energy and cutoff range (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and cutoff scale appear to be log-normal distributed.
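A generic sketch of computing variance as a function of lag from an evenly sampled field component, the kind of curve the spectral fit above is matched against; the increment-based estimator, toy signal and lag grid are assumptions, not the authors' exact procedure.

```python
import numpy as np

def variance_vs_lag(b, lags):
    """b: evenly sampled field component; lags: integer lags in samples."""
    return np.array([0.5 * np.mean((b[lag:] - b[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(5)
b = np.cumsum(rng.normal(size=5000)) * 0.05 + rng.normal(size=5000)  # toy correlated signal
lags = np.unique(np.logspace(0, 3, 20).astype(int))                  # 1 to 1000 samples
print(variance_vs_lag(b, lags).round(2))                             # saturates at the asymptotic variance
```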
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
Hygiene Factors: Using VLE Minimum Standards to Avoid Student Dissatisfaction
ERIC Educational Resources Information Center
Reed, Peter; Watmough, Simon
2015-01-01
Inconsistency in the use of Virtual Learning Environments (VLEs) has led to dissatisfaction amongst students and is an issue across the Higher Education sector. This paper outlines research undertaken in one faculty within one university to ascertain staff and student views on minimum standards within the VLE; how the VLE could reduce student…
Code of Federal Regulations, 2013 CFR
2013-04-01
... information technology and information technology data? 543.20 Section 543.20 Indians NATIONAL INDIAN GAMING... § 543.20 What are the minimum internal control standards for information technology and information... prevent the concealment of fraud. (4) Information technology agents having access to Class II gaming...
Code of Federal Regulations, 2014 CFR
2014-04-01
... information technology and information technology data? 543.20 Section 543.20 Indians NATIONAL INDIAN GAMING... § 543.20 What are the minimum internal control standards for information technology and information... prevent the concealment of fraud. (4) Information technology agents having access to Class II gaming...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2013 CFR
2013-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2012 CFR
2012-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2014 CFR
2014-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-03
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Guidelines for Federal Workplace Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were... Laboratories and Instrumented Initial Testing Facilities (IITF) must meet in order to conduct drug and specimen...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-13
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Guidelines for Federal Workplace Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were... and Instrumented Initial Testing Facilities (IITF) must meet in order to conduct drug and specimen...
Guidelines and Minimum Standards for Operation of Texas Proprietary Schools. (Revised.)
ERIC Educational Resources Information Center
Texas Education Agency, Austin. Div. of Proprietary Schools and Veterans Education.
This guide, prepared to assist owners and managers of proprietary schools in Texas in applying for and obtaining approval by the Texas Education Agency, provides guidelines and minimum standards of practice for proprietary school operation. First, the guidelines are discussed in terms of definitions, exemptions, general provisions, certificates of…
25 CFR 542.1 - What does this part cover?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false What does this part cover? 542.1 Section 542.1 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.1 What does this part cover? This part establishes the minimum internal control standards...
7 CFR 989.702 - Minimum grade standards for packed raisins.
Code of Federal Regulations, 2010 CFR
2010-01-01
... RAISINS PRODUCED FROM GRAPES GROWN IN CALIFORNIA Quality Control § 989.702 Minimum grade standards for... washed with water to assure a wholesome product. (2) Grades. (i) Marketing Order Grade A is a quality of... paragraph. (ii) Marketing Order Grade B is the quality of the Cluster Seedless raisins that have similar...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex... (GPS) Y RWY 20, Amdt 1B Cambridge, MN, Cambridge Muni, Takeoff Minimums and Obstacle DP, Orig Pipestone, MN, Pipestone Muni, NDB RWY 36, Amdt 7, CANCELLED Rushford, MN, Rushford Muni, Takeoff Minimums and...
A new Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the local parameters of the interplanetary current sheet. The method, called 'HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. It has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, although in many cases the two methods behave similarly, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than MVM.
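A sketch of the classic minimum variance method (MVM) used as the benchmark above: the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue estimates the current-sheet normal. The synthetic Harris-like crossing and noise levels are illustrative assumptions.

```python
import numpy as np

def minimum_variance_normal(B):
    """B: (n_samples, 3) magnetic field vectors; returns (unit normal estimate, eigenvalues)."""
    M = np.cov(B, rowvar=False)                 # 3x3 magnetic variance matrix
    vals, vecs = np.linalg.eigh(M)              # eigenvalues in ascending order
    return vecs[:, 0], vals                     # minimum-variance direction first

rng = np.random.default_rng(6)
t = np.linspace(-1, 1, 400)
B = np.c_[5 * np.tanh(t / 0.2) + 0.3 * rng.normal(size=t.size),   # rotating (L) component
          2.0 + 0.5 * rng.normal(size=t.size),                    # intermediate (M) component
          0.1 * rng.normal(size=t.size)]                          # quasi-constant normal (N)
normal, eigenvalues = minimum_variance_normal(B)
print(np.round(normal, 3), np.round(eigenvalues, 3))
```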
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variance values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that both methods for obtaining natural head position, the mirror method and the fluid level device method, were comparable, but reproducibility was higher with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and the minimum variance was seen with the fluid level device method, as shown by precision and Pearson correlation. The two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.
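A sketch of Dahlberg's error of the method for paired repeat measurements, d = sqrt(Σ(x1 - x2)² / (2n)), the reproducibility statistic named above; the example values are made up.

```python
import numpy as np

def dahlberg(x1, x2):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    return np.sqrt(np.sum((x1 - x2) ** 2) / (2 * x1.size))

first  = [82.1, 79.5, 85.0, 80.2]    # e.g. a cephalometric angle, first session
second = [82.8, 79.0, 84.6, 81.0]    # same subjects, second session
print(round(dahlberg(first, second), 2))
```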
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance, which greatly reduced the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions (see the sketch after this abstract). Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate good reproduction of the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures was still found for a number of samples. Analysis of peak flow shows that small and medium floods are simulated accurately, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulations.
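A sketch of a variance-based (Sobol) sensitivity analysis, assuming the SALib package is available; the analytic toy function stands in for a DHSVM run, and the parameter names and bounds are illustrative only.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-4, 1e-2], [0.3, 0.6], [0.1, 0.4]],
}
X = saltelli.sample(problem, 512)                                      # Saltelli sampling design
Y = np.array([np.log(x[0]) + 5 * x[1] + 0.1 * x[2] ** 2 for x in X])   # stand-in for model output
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["ST"], 2))))              # total-order indices
```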
Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.
2014-01-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180
Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E
2014-04-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing ergodic assumption.We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure of these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space.With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the singlesite standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
Reyes, Mayra I; Pérez, Cynthia M; Negrón, Edna L
2008-03-01
Consumers increasingly use bottled water and home water treatment systems to avoid direct tap water. According to the International Bottled Water Association (IBWA), an industry trade group, 5 billion gallons of bottled water were consumed by North Americans in 2001. The principal aim of this study was to assess the microbial quality of in-house and imported bottled water for human consumption, by measurement and comparison of the concentration of bacterial endotoxin and standard cultivable methods of indicator microorganisms, specifically, heterotrophic and fecal coliform plate counts. A total of 21 brands of commercial bottled water, consisting of 10 imported and 11 in-house brands, selected at random from 96 brands that are consumed in Puerto Rico, were tested at three different time intervals. The Standard Limulus Amebocyte Lysate test, gel clot method, was used to measure the endotoxin concentrations. The minimum endotoxin concentration in 63 water samples was less than 0.0625 EU/mL, while the maximum was 32 EU/mL. The minimum bacterial count showed no growth, while the maximum was 7,500 CFU/mL. Bacterial isolates like P. fluorescens, Corynebacterium sp. J-K, S. paucimobilis, P. versicularis, A. baumannii, P. chlororaphis, F. indologenes, A. faecalis and P. cepacia were identified. Repeated measures analysis of variance demonstrated that endotoxin concentration did not change over time, while there was a statistically significant (p < 0.05) decrease in bacterial count over time. In addition, multiple linear regression analysis demonstrated that a unit change in the concentration of endotoxin across time was associated with a significant (p < 0.05) reduction in the bacteriological cell count. This analysis evidenced a significant time effect in the average log bacteriological cell count. Although bacterial growth was not detected in some water samples, endotoxin was present. Measurement of Gram-negative bacterial endotoxins is one of the methods that have been suggested as a rapid way of determining bacteriological water quality.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
A New Look at Some Solar Wind Turbulence Puzzles
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2006-01-01
Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time-permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.
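For readers unfamiliar with how minimum variance directions of the fluctuating vectors are usually obtained, the sketch below (a generic illustration, not the analysis used in the talk) applies standard minimum variance analysis: eigen-decompose the covariance matrix of the measured magnetic field components and take the eigenvector with the smallest eigenvalue. The synthetic field series is invented.
    import numpy as np

    def minimum_variance_direction(b):
        # b: (N, 3) array of magnetic field vectors. Returns the unit eigenvector of
        # the fluctuation covariance matrix with the smallest eigenvalue, plus eigenvalues.
        cov = np.cov(b, rowvar=False)           # 3x3 covariance of the components
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns eigenvalues in ascending order
        return eigvecs[:, 0], eigvals

    # Synthetic example: fluctuations mostly in the x-y plane, little variance along z.
    rng = np.random.default_rng(1)
    b = rng.normal(size=(1000, 3)) * np.array([3.0, 2.0, 0.3])
    n_min, lam = minimum_variance_direction(b)
    print(n_min, lam)   # n_min should be close to +/- [0, 0, 1]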
Development of minimum state requirements for local growth management policies : phase 1.
DOT National Transportation Integrated Search
2015-01-01
This research entailed the development of minimum requirements for local growth management policies for use in Louisiana. The purpose of developing minimum statewide standards is to try to alleviate some of the stress placed on state and local govern...
Development of minimum state requirements for local growth management policies -- phase 1.
DOT National Transportation Integrated Search
2015-11-01
This research entailed the development of minimum requirements for local growth management policies for use in Louisiana. The purpose of developing minimum statewide standards is to try to alleviate some of the stress placed on state and local go...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-29
..., Takeoff Minimums and Obstacle DP, Amdt 3 Harvard, NE, Harvard State, RNAV (GPS) RWY 35, Orig Harvard, NE, Harvard State, Takeoff Minimums and Obstacle DP, Orig Harvard, NE, Harvard State, VOR/DME RNAV OR GPS RWY...
Wavefront aberrations of x-ray dynamical diffraction beams.
Liao, Keliang; Hong, Youli; Sheng, Weifan
2014-10-01
The effects of dynamical diffraction in x-ray diffractive optics with large numerical aperture render the wavefront aberrations difficult to describe using the aberration polynomials, yet knowledge of them plays an important role in a vast variety of scientific problems ranging from optical testing to adaptive optics. Although the diffraction theory of optical aberrations was established decades ago, its application in the area of x-ray dynamical diffraction theory (DDT) is still lacking. Here, we conduct a theoretical study on the aberration properties of x-ray dynamical diffraction beams. By treating the modulus of the complex envelope as the amplitude weight function in the orthogonalization procedure, we generalize the nonrecursive matrix method for the determination of orthonormal aberration polynomials, wherein Zernike DDT and Legendre DDT polynomials are proposed. As an example, we investigate the aberration evolution inside a tilted multilayer Laue lens. The corresponding Legendre DDT polynomials are obtained numerically, which represent balanced aberrations yielding minimum variance of the classical aberrations of an anamorphic optical system. The balancing of classical aberrations and their standard deviations are discussed. We also present the Strehl ratio of the primary and secondary balanced aberrations.
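The nonrecursive matrix method referred to above amounts to orthonormalizing a basis with respect to a weighted inner product. The generic numerical sketch below is not the authors' Zernike-DDT/Legendre-DDT construction; the one-dimensional grid, Gaussian amplitude weight, and monomial basis are invented stand-ins. It builds the Gram matrix under the amplitude weight and converts the basis to an orthonormal set via a Cholesky factor.
    import numpy as np

    def orthonormal_basis(basis_vals, weight, dx):
        # basis_vals: (n_basis, n_points) samples of the starting polynomials.
        # weight: (n_points,) non-negative amplitude weight.
        # Returns M such that M @ basis_vals is orthonormal under
        # <f, g> = sum(w * f * g) * dx / sum(w * dx).
        norm = np.sum(weight) * dx
        gram = (basis_vals * weight) @ basis_vals.T * dx / norm   # Gram matrix C
        q = np.linalg.cholesky(gram)                              # C = Q Q^T
        return np.linalg.inv(q)                                   # M = Q^{-1}, so M C M^T = I

    # Example over a 1-D aperture x in [-1, 1] with a Gaussian amplitude weight.
    x = np.linspace(-1, 1, 2001)
    dx = x[1] - x[0]
    w = np.exp(-x**2 / 0.5)                        # stand-in for |complex envelope|
    basis = np.vstack([x**k for k in range(4)])    # 1, x, x^2, x^3
    M = orthonormal_basis(basis, w, dx)
    ortho = M @ basis
    # Check orthonormality under the weighted inner product (should be ~identity).
    G = (ortho * w) @ ortho.T * dx / (np.sum(w) * dx)
    print(np.round(G, 6))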
The Effects of Walking Workstations on Biomechanical Performance.
Grindle, Daniel M; Baker, Lauren; Furr, Mike; Puterio, Tim; Knarr, Brian; Higginson, Jill
2018-04-03
Prolonged sitting has been associated with negative health effects. Walking workstations have become increasingly popular in the workplace. There is a lack of research on the biomechanical effect of walking workstations. This study analyzed whether walking while working alters normal gait patterns. Nine participants completed four walking trials at 2.4 km·h⁻¹ and 4.0 km·h⁻¹: baseline walking condition, walking while performing a math task, a reading task, and a typing task. Biomechanical data were collected using standard motion capture procedures. The first maximum vertical ground reaction force, stride width, stride length, minimum toe clearance, peak swing hip abduction and flexion angles, peak swing and stance ankle dorsiflexion and knee flexion angles were analyzed. Differences between conditions were evaluated using analysis of variance tests with Bonferroni correction (p ≤ 0.05). Stride width decreased during the reading task at both speeds. Although other parameters exhibited significant differences when multitasking, these changes were within the normal range of gait variability. It appears that for short periods, walking workstations do not negatively impact gait in healthy young adults.
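A schematic of the statistical comparison described above, using hypothetical data and variable names rather than the study's dataset: an omnibus ANOVA across conditions followed by Bonferroni-corrected pairwise paired t tests.
    import numpy as np
    from itertools import combinations
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    # Hypothetical stride-width data (cm) for 9 participants in 4 conditions.
    rng = np.random.default_rng(2)
    conditions = ["baseline", "math", "reading", "typing"]
    data = {c: rng.normal(loc=10 - (c == "reading"), scale=1.5, size=9) for c in conditions}

    # Omnibus one-way ANOVA across conditions.
    f_stat, p_omnibus = stats.f_oneway(*data.values())
    print(f"F = {f_stat:.2f}, p = {p_omnibus:.3f}")

    # Pairwise paired t tests with Bonferroni correction.
    pairs = list(combinations(conditions, 2))
    pvals = [stats.ttest_rel(data[a], data[b]).pvalue for a, b in pairs]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    for (a, b), p, r in zip(pairs, p_adj, reject):
        print(a, "vs", b, f"p_adj = {p:.3f}", "significant" if r else "n.s.")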
Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal
2015-03-06
Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. The percentage recovery and the sorption behavior of Dy(III) have been investigated over a wide range of experimental parameters using these beads. The Taguchi method utilizing an L-18 orthogonal array has been adopted to identify the most influential process parameters responsible for a higher degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous phase acidity influences the percentage recovery most. The presence of polyvinyl alcohol and multiwalled carbon nanotubes modified the internal structure of the composite beads and resulted in uniform distribution of the organic extractant inside the polymeric matrix. The experiment performed under the optimum process conditions predicted by the Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity by the polymeric beads with minimum standard deviation. Copyright © 2015 Elsevier B.V. All rights reserved.
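As a rough illustration of how Taguchi-style results like those above are typically computed (the array fragment, factor names, and responses below are invented, not the study's L-18 design), one computes a larger-is-better signal-to-noise ratio for each run and then averages it by factor level to rank factor influence.
    import numpy as np
    import pandas as pd

    def sn_larger_is_better(y):
        # Taguchi larger-is-better S/N ratio: -10 * log10(mean(1/y^2)).
        y = np.asarray(y, dtype=float)
        return -10 * np.log10(np.mean(1.0 / y**2))

    # Toy 4-run fragment of an orthogonal array with two factors and duplicate runs.
    runs = pd.DataFrame({
        "acidity_level":   [1, 1, 2, 2],
        "feed_conc_level": [1, 2, 1, 2],
        "recovery_pct":    [[62, 65], [80, 78], [55, 57], [71, 74]],
    })
    runs["sn"] = runs["recovery_pct"].apply(sn_larger_is_better)

    # Main effect of each factor = mean S/N per level; larger spread => more influential.
    for factor in ["acidity_level", "feed_conc_level"]:
        effects = runs.groupby(factor)["sn"].mean()
        print(factor, effects.to_dict(), "delta =", round(effects.max() - effects.min(), 2))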
Measurements of the earth radiation budget from satellites during the first GARP global experiment
NASA Technical Reports Server (NTRS)
Vonder Haar, T. H.; Campbell, G. G.; Smith, E. A.; Arking, A.; Coulson, K.; Hickey, J.; House, F.; Ingersoll, A.; Jacobowitz, H.; Smith, L.
1981-01-01
Radiation budget data (which will aid in climate model development) and solar constant measurements (both to be used for the study of long-term climate change and interannual seasonal weather variability) are presented, obtained during Nimbus-6 and Nimbus-7 satellite flights using wide-field-of-view, scanner, and black cavity detectors. Data on the solar constant, described as a function of the date of measurement, are given. The unweighted mean amounts to 1377 ± 20 W/m², with a standard deviation of 8 W/m². The new solar data are combined with earlier measurements, and it is suggested that the total absolute energy output of the sun is at a minimum at 'solar maximum' and vice versa. Attention is given to the measurements of the net radiation budget, the planetary albedo, and the infrared radiant exitance. The annual and semiannual cycles of normal variability explain most of the variance of energy exchange between the earth and space. Examination of separate ocean and atmospheric energy budgets implies a net continent-ocean region energy exchange.
Drakouli, Maria; Petsios, Konstantinos; Matziou, Vasiliki
2016-05-09
Theme: Cardiology. Introduction: Measuring quality of life (QoL) in children and adolescents with congenital heart disease (CHD) is of great clinical importance. The aims of the study were: (a) to adapt the PedsQL Cardiac Module for children aged two to 18 years with CHD in a sample of the Greek population; and (b) to determine its reliability and validity. Forward and backward translation methodology was used. Parents and children completed the instrument during: (a) hospitalization and (b) visits to the paediatric cardiology outpatient department. Cross-informant variance between children and parents was thoroughly assessed. Missing item responses did not exceed 5%. All internal consistency reliability coefficients for the inventory exceeded the minimum standard for group comparisons, being over 0.75. Hypothesized correlations between the cardiac module and core scales were statistically significant (p < 0.05). Agreement between children and parents was relatively high. Pilot study results will additionally be presented. The findings support the feasibility, reliability and validity of the Greek translation of the PedsQL Cardiac Module in children with congenital heart disease (CHD).
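Internal consistency coefficients of the kind reported above are most commonly Cronbach's alpha; the abstract does not state which coefficient was used, so that is an assumption here. A minimal sketch of the computation on invented item scores:
    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) matrix of item scores.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Invented 0-100 scores for 6 items from 40 respondents.
    rng = np.random.default_rng(3)
    latent = rng.normal(60, 15, size=(40, 1))
    scores = np.clip(latent + rng.normal(0, 10, size=(40, 6)), 0, 100)
    print(round(cronbach_alpha(scores), 2))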
The Actual Mass of the Object Orbiting Epsilon Eridani
NASA Astrophysics Data System (ADS)
Gatewood, G.
2000-10-01
We have tested our 112 Multichannel Astrometric Photometer (MAP; Gatewood 1987, AJ 94, 213) observations (beginning in 1988) of Epsilon Eridani against the orbital elements provided to us by W. Cochran (private communication). The reduction algorithm is detailed most recently by Gatewood, Han, and Black (2000, ApJ Letters, in press). The seven-year period is clearly shown in a plot of variance versus trial period. Although it is near the limit of the current instrument, the astrometric orbital motion is apparent in the residuals to a standard derivation of the star's proper motion and parallax. The astrometric orbital parameters derived by forcing the spectroscopic elements are: semimajor axis = 1.51 +/- 0.44 mas, node of the orbit on the sky = 120 +/- 28 deg, inclination out of the plane of the sky = 46 +/- 17 deg, actual mass = 1.2 +/- 0.33 times that of Jupiter. Our study confirms this object (this is not a minimum mass) as the nearest extrasolar Jupiter-mass companion to our solar system. In view of its large orbital eccentricity, however, its exact nature remains unclear.
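The quoted companion mass can be roughly reproduced from the quoted photocentric semimajor axis, using values not given in the abstract and assumed here: a parallax of about 311 mas for Epsilon Eridani, a stellar mass of about 0.85 solar masses, and the roughly 7 yr period. The sketch below is back-of-the-envelope arithmetic, not the authors' reduction.
    # Back-of-the-envelope check of the ~1.2 Jupiter-mass result.
    alpha_mas = 1.51        # photocentric semimajor axis from the abstract (mas)
    parallax_mas = 311.0    # assumed parallax of Epsilon Eridani (mas)
    m_star = 0.85           # assumed stellar mass (solar masses)
    period_yr = 7.0         # orbital period from the abstract (yr)

    a1_au = alpha_mas / parallax_mas                 # star's orbit about the barycenter (AU)
    a_rel_au = (m_star * period_yr**2) ** (1.0 / 3)  # Kepler's third law, companion mass << M*
    m_planet_solar = m_star * a1_au / a_rel_au       # m_p ~= M* * a1 / a_rel
    print(m_planet_solar * 1047.6, "Jupiter masses")  # about 1.2, consistent with the abstract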
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
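To make the bounding idea concrete, the sketch below is a generic bipartite-assignment approximation on invented toy graphs, not the paper's binary linear program: it builds a vertex substitution/deletion/insertion cost matrix and solves it with the Hungarian algorithm, which gives a lower bound on the edit distance when edge costs are non-negative.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def vertex_assignment_bound(labels1, labels2, c_sub=1.0, c_del=1.0, c_ins=1.0):
        # Lower bound on graph edit distance (for non-negative edge costs) from an
        # optimal assignment of vertex substitutions, deletions, and insertions.
        n1, n2 = len(labels1), len(labels2)
        cost = np.zeros((n1 + n2, n1 + n2))
        for i, a in enumerate(labels1):
            for j, b in enumerate(labels2):
                cost[i, j] = 0.0 if a == b else c_sub   # substitute vertex i by j
            cost[i, n2:] = c_del                        # delete vertex i
        cost[n1:, :n2] = c_ins                          # insert vertices of the second graph
        rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
        return cost[rows, cols].sum()

    # Toy vertex-labelled graphs standing in for chemical graphs (atom symbols).
    print(vertex_assignment_bound(["C", "C", "O"], ["C", "N"]))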
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
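For readers who want the computation rather than the spreadsheet mechanics, single-factor ANOVA can be reconstructed directly from the summary statistics (group sizes, means, and standard deviations). The sketch below uses invented summary data.
    import numpy as np
    from scipy import stats

    def anova_from_summary(ns, means, sds):
        # Single-factor ANOVA from per-group n, mean, and standard deviation.
        ns, means, sds = map(np.asarray, (ns, means, sds))
        k, N = len(ns), ns.sum()
        grand_mean = np.sum(ns * means) / N
        ss_between = np.sum(ns * (means - grand_mean) ** 2)
        ss_within = np.sum((ns - 1) * sds ** 2)
        df_between, df_within = k - 1, N - k
        f = (ss_between / df_between) / (ss_within / df_within)
        p = stats.f.sf(f, df_between, df_within)
        return f, p

    # Invented category summaries: n, mean, sd for three groups.
    f, p = anova_from_summary(ns=[12, 15, 10], means=[5.1, 6.3, 4.8], sds=[1.2, 1.4, 1.1])
    print(f"F = {f:.2f}, p = {p:.4f}")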
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 41 Public Contracts and Property Management 1 2011-07-01 2009-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Special considerations regarding compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE ANIMAL WELFARE STANDARDS Specifications for the Humane Handling...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Special considerations regarding compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE ANIMAL WELFARE STANDARDS Specifications for the Humane Handling...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Special considerations regarding compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE ANIMAL WELFARE STANDARDS Specifications for the Humane Handling...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Special considerations regarding compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE ANIMAL WELFARE STANDARDS Specifications for the Humane Handling...
9 CFR 3.100 - Special considerations regarding compliance and/or variance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Special considerations regarding compliance and/or variance. 3.100 Section 3.100 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE ANIMAL WELFARE STANDARDS Specifications for the Humane Handling...
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of within-study correlations, which are usually unavailable; this limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and the robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
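As a schematic of what a robust, sandwich-type variance for a pooled estimate under a working independence assumption looks like (a generic illustration with invented effect estimates, not the specific estimator derived in the paper), one can compare the model-based variance of an inverse-variance weighted mean with its empirical sandwich counterpart.
    import numpy as np

    def pooled_with_robust_se(y, v):
        # Inverse-variance weighted pooled estimate with model-based and
        # robust (sandwich) standard errors under working independence.
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v
        theta = np.sum(w * y) / np.sum(w)
        se_model = np.sqrt(1.0 / np.sum(w))
        se_robust = np.sqrt(np.sum(w**2 * (y - theta) ** 2)) / np.sum(w)
        return theta, se_model, se_robust

    # Invented study-level estimates and within-study variances.
    rng = np.random.default_rng(4)
    y = rng.normal(0.3, 0.25, size=60)      # 60 studies
    v = rng.uniform(0.01, 0.05, size=60)    # reported variances that understate heterogeneity
    print(pooled_with_robust_se(y, v))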
Damond, F; Benard, A; Balotta, Claudia; Böni, Jürg; Cotten, Matthew; Duque, Vitor; Ferns, Bridget; Garson, Jeremy; Gomes, Perpetua; Gonçalves, Fátima; Gottlieb, Geoffrey; Kupfer, Bernd; Ruelle, Jean; Rodes, Berta; Soriano, Vicente; Wainberg, Mark; Taieb, Audrey; Matheron, Sophie; Chene, Genevieve; Brun-Vezinet, Francoise
2011-10-01
Accurate HIV-2 plasma viral load quantification is crucial for adequate HIV-2 patient management and for the proper conduct of clinical trials and international cohort collaborations. This study compared the homogeneity of HIV-2 RNA quantification when using HIV-2 assays from ACHI(E)V(2E) study sites and either in-house PCR calibration standards or common viral load standards supplied to all collaborators. Each of the 12 participating laboratories quantified blinded HIV-2 samples, using its own HIV-2 viral load assay and standard as well as centrally validated and distributed common HIV-2 group A and B standards (http://www.hiv.lanl.gov/content/sequence/HelpDocs/subtypes-more.html). Aliquots of HIV-2 group A and B strains, each at 2 theoretical concentrations (2.7 and 3.7 log10 copies/ml), were tested. Intralaboratory, interlaboratory, and overall variances of quantification results obtained with both standards were compared using F tests. For HIV-2 group A quantifications, overall and interlaboratory and/or intralaboratory variances were significantly lower when using the common standard than when using in-house standards at the concentration levels of 2.7 log10 copies/ml and 3.7 log10 copies/ml, respectively. For HIV-2 group B, a high heterogeneity was observed and the variances did not differ according to the type of standard used. In this international collaboration, the use of a common standard improved the homogeneity of HIV-2 group A RNA quantification only. The diversity of HIV-2 group B, particularly in PCR primer-binding regions, may explain the heterogeneity in quantification of this strain. Development of a validated HIV-2 viral load assay that accurately quantifies distinct circulating strains is needed.
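The variance comparisons mentioned above are F tests on the ratio of two sample variances; a minimal sketch on invented log10 viral load replicates (not the study data):
    import numpy as np
    from scipy import stats

    def f_test_variances(x, y):
        # Two-sided F test for equality of variances of two independent samples.
        x, y = np.asarray(x, float), np.asarray(y, float)
        f = x.var(ddof=1) / y.var(ddof=1)
        df1, df2 = len(x) - 1, len(y) - 1
        p_one_sided = stats.f.sf(f, df1, df2) if f > 1 else stats.f.cdf(f, df1, df2)
        return f, min(1.0, 2 * p_one_sided)

    # Invented replicate quantifications (log10 copies/ml) with two standards.
    rng = np.random.default_rng(5)
    common = rng.normal(2.7, 0.15, size=12)    # common standard: tighter spread
    in_house = rng.normal(2.7, 0.35, size=12)  # in-house standards: wider spread
    print(f_test_variances(in_house, common))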
Scale structure: Processing Minimum Standard and Maximum Standard Scalar Adjectives
Frazier, Lyn; Clifton, Charles; Stolterfoht, Britta
2008-01-01
Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property (Kennedy, 1999). Scales, ordered sets of degrees, have begun to be studied systematically in semantics (Kennedy, to appear, Kennedy & McNally, 2005, Rotstein & Winter, 2004). We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a ‘standard value’ on a scale as part of the normal comprehension of a sentence containing the adjective (the “Obligatory Scale” hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. Further, a type of ‘quantificational variability effect’ may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like The dishes are mostly dirty, which may receive either a degree interpretation (e.g. 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence. PMID:17376422
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and..., Takeoff Minimums and Obstacle DP, Amdt 2 Perham, MN, Perham Muni, RNAV (GPS) RWY 13, Orig Perham, MN, Perham Muni, RNAV (GPS) RWY 31, Amdt 1 Perham, MN, Perham Muni, Takeoff Minimums and Obstacle DP, Amdt 1...
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2011 CFR
2011-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
Code of Federal Regulations, 2011 CFR
2011-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of game...
25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?
Code of Federal Regulations, 2011 CFR
2011-04-01
... GAMES § 547.10 What are the minimum standards for Class II gaming system critical events? This section... component can no longer be considered reliable. Accordingly, any game play on the affected component shall... or the medium itself has some fault. Any game play on the affected component shall cease immediately...
Code of Federal Regulations, 2010 CFR
2010-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of game...
Code of Federal Regulations, 2011 CFR
2011-04-01
... OF CLASS II GAMES § 547.7 What are the minimum technical hardware standards applicable to Class II... the game, and are specially manufactured or proprietary and not off-the-shelf, shall display a unique... outcome or integrity of any game, progressive award, financial instrument, cashless transaction, voucher...
Code of Federal Regulations, 2012 CFR
2012-04-01
... OF CLASS II GAMES § 547.9 What are the minimum technical standards for Class II gaming system... digits to accommodate the design of the game. (3) Accounting data displayed to the player may be... audit, configuration, recall and test modes; or (ii) Temporarily, during entertaining displays of game...
Code of Federal Regulations, 2012 CFR
2012-04-01
... OF CLASS II GAMES § 547.7 What are the minimum technical hardware standards applicable to Class II... the game, and are specially manufactured or proprietary and not off-the-shelf, shall display a unique... outcome or integrity of any game, progressive award, financial instrument, cashless transaction, voucher...
25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?
Code of Federal Regulations, 2012 CFR
2012-04-01
... GAMES § 547.10 What are the minimum standards for Class II gaming system critical events? This section... component can no longer be considered reliable. Accordingly, any game play on the affected component shall... or the medium itself has some fault. Any game play on the affected component shall cease immediately...
25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?
Code of Federal Regulations, 2010 CFR
2010-04-01
... GAMES § 547.10 What are the minimum standards for Class II gaming system critical events? This section... be considered reliable. Accordingly, any game play on the affected component shall cease immediately... has some fault. Any game play on the affected component shall cease immediately, and an appropriate...
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2012 CFR
2012-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2010 CFR
2010-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
Code of Federal Regulations, 2010 CFR
2010-04-01
... OF CLASS II GAMES § 547.7 What are the minimum technical hardware standards applicable to Class II... the game, and are specially manufactured or proprietary and not off-the-shelf, shall display a unique... outcome or integrity of any game, progressive award, financial instrument, cashless transaction, voucher...
A Study of Minimum Competency Programs. Final Comprehensive Report. Vol. 1. Vol. 2.
ERIC Educational Resources Information Center
Gorth, William Phillip; Perkins, Marcy R.
The status of minimum competency testing programs, as of June 30, 1979, is given through descriptions of 31 state programs and 20 local district programs. For each program, the following information is provided: legislative and policy history; implementation phase; goals; competencies to be tested; standards and standard setting; target groups and…