RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
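As a loose illustration of Monte Carlo estimation of phase-error variance and PDF, the sketch below simulates a toy first-order digital PLL with a sinusoidal phase detector driven by additive Gaussian phase noise. The loop gain and noise level are arbitrary assumptions, and the model is a stand-in for, not a reproduction of, the paper's imperfect second-order HPLL operating in RFI.

```python
import numpy as np

def phase_error_statistics(loop_gain=0.1, noise_std=0.2, n_steps=2000, n_runs=2000, seed=0):
    """Monte Carlo estimate of the steady-state phase-error variance and PDF of a
    toy first-order digital PLL (sinusoidal phase detector, additive Gaussian
    phase noise). All parameter values are arbitrary illustrative assumptions."""
    rng = np.random.default_rng(seed)
    theta_e = np.zeros(n_runs)                      # one phase-error state per Monte Carlo run
    for _ in range(n_steps):
        noise = noise_std * rng.standard_normal(n_runs)
        theta_e = theta_e - loop_gain * np.sin(theta_e) + noise
    wrapped = (theta_e + np.pi) % (2 * np.pi) - np.pi   # phase error modulo 2*pi
    pdf, edges = np.histogram(wrapped, bins=61, range=(-np.pi, np.pi), density=True)
    return wrapped.var(), pdf, edges

var_hat, pdf, edges = phase_error_statistics()
print("estimated phase-error variance (rad^2):", round(var_hat, 4))
```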
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
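As a minimal, generic illustration of the crude (two-part) MDL idea — not the paper's Bayesian network experiments — the sketch below scores polynomial regression models of increasing complexity with a (k/2) log2(n) parameter penalty plus the Gaussian code length of the residuals, and selects the degree with the smallest total. The data and model family here are synthetic assumptions chosen only to show the selection mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic data from a quadratic trend plus noise
n = 60
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 1.5 * x**2 + 0.3 * rng.standard_normal(n)

def crude_mdl(degree):
    """Two-part ('crude') MDL score of a polynomial model: Gaussian code length
    of the residuals plus a (k/2) log2(n) penalty for the k fitted parameters."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = resid.var()
    k = degree + 2                                   # polynomial coefficients + noise variance
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
    model_bits = 0.5 * k * np.log2(n)
    return data_bits + model_bits

scores = {d: crude_mdl(d) for d in range(8)}
best = min(scores, key=scores.get)
print("selected degree:", best)                      # expected to balance fit and complexity
```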
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species generation time. Two population dispersion strategies are proposed: core population and dispersed population.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
Rowe, Penny M; Neshyba, Steven P; Walden, Von P
2011-03-14
An analytical expression for the variance of the radiance measured by Fourier-transform infrared (FTIR) emission spectrometers exists only in the limit of low noise. Outside this limit, the variance needs to be calculated numerically. In addition, a criterion for low noise is needed to identify properly calibrated radiances and optimize the instrument bandwidth. In this work, the variance and the magnitude of a noise-dependent spectral bias are calculated as a function of the system responsivity (r) and the noise level in its estimate (σr). The criterion σr/r<0.3, applied to downwelling and upwelling FTIR emission spectra, shows that the instrument bandwidth is specified properly for one instrument but needs to be restricted for another.
Object aggregation using Neyman-Pearson analysis
NASA Astrophysics Data System (ADS)
Bai, Li; Hinman, Michael L.
2003-04-01
This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify names of military vehicle convoys in the level-2 fusion process. The data is generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. This data is processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis techniques based upon the Neyman-Pearson (NP) criterion. One characteristic of NP is the low error probability when a priori information is unknown. The NP approach was demonstrated with this advantage over a Bayesian technique.
Revealing Hidden Einstein-Podolsky-Rosen Nonlocality
NASA Astrophysics Data System (ADS)
Walborn, S. P.; Salles, A.; Gomes, R. M.; Toscano, F.; Souto Ribeiro, P. H.
2011-04-01
Steering is a form of quantum nonlocality that is intimately related to the famous Einstein-Podolsky-Rosen (EPR) paradox that ignited the ongoing discussion of quantum correlations. Within the hierarchy of nonlocal correlations appearing in nature, EPR steering occupies an intermediate position between Bell nonlocality and entanglement. In continuous variable systems, EPR steering correlations have been observed by violation of Reid’s EPR inequality, which is based on inferred variances of complementary observables. Here we propose and experimentally test a new criterion based on entropy functions, and show that it is more powerful than the variance inequality for identifying EPR steering. Using the entropic criterion our experimental results show EPR steering, while the variance criterion does not. Our results open up the possibility of observing this type of nonlocality in a wider variety of quantum states.
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
ERIC Educational Resources Information Center
Lee, Wan-Fung; Bulcock, Jeffrey Wilson
The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of machining parameters on the energy consumption has been found using analysis of variance. The validation of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
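As a rough, simplified companion to this abstract, the sketch below builds a global minimum-variance portfolio from synthetic heavy-tailed returns and compares the plain sample covariance with scikit-learn's Ledoit-Wolf shrinkage estimator. It uses shrinkage only (no Tyler M-estimation and no random-matrix tuning of the shrinkage intensity as in the paper), and all data and dimensions are assumptions.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(2)

# synthetic heavy-tailed returns: n observations of p assets (Student-t marginals)
n, p = 250, 50
returns = 0.01 * rng.standard_t(df=4, size=(n, p))

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = inv(C) 1 / (1' inv(C) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

cov_sample = np.cov(returns, rowvar=False)        # noisy when n is close to p
cov_shrunk = LedoitWolf().fit(returns).covariance_

for name, cov in [("sample covariance", cov_sample), ("Ledoit-Wolf shrinkage", cov_shrunk)]:
    w = min_variance_weights(cov)
    print(f"{name:22s} portfolio variance: {float(w @ cov_sample @ w):.6f}")
```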
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
The Problems of Multiple Feedback Estimation.
ERIC Educational Resources Information Center
Bulcock, Jeffrey W.
The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…
Criterion-Related Validity: Assessing the Value of Subscores
ERIC Educational Resources Information Center
Davison, Mark L.; Davenport, Ernest C., Jr.; Chang, Yu-Feng; Vue, Kory; Su, Shiyang
2015-01-01
Criterion-related profile analysis (CPA) can be used to assess whether subscores of a test or test battery account for more criterion variance than does a single total score. Application of CPA to subscore evaluation is described, compared to alternative procedures, and illustrated using SAT data. Considerations other than validity and reliability…
ERIC Educational Resources Information Center
Oakland, Thomas
New strategies for evaluation criterion referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of normed referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…
McCann, Stewart J H
2011-01-01
The present study was conducted to determine whether individual-level correlates of sexual prejudice (i.e., conservatism-liberalism, religious fundamentalism, educational levels, urbanism, income, and living in the South) are predictive at the state level of laws restricting homosexual behaviors and desires. Criterion 1 was a multifaceted index of state laws concerning gay men and lesbians; Criterion 2 was an index of state laws regarding same-sex partnerships. Multiple regression strategies showed that state conservatism-liberalism, as determined from the responses of 141,798 individuals aggregated at the state level (Erikson, Wright, & McIver, 1993), was the prime state-level predictor of both criteria. For Criterion 1, only Southern state status accounted for additional variance (4.2%) above the 54.8% already accounted for by conservatism-liberalism. For Criterion 2, no other variables accounted for variance beyond the 44.6% accounted for by state conservatism-liberalism.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
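To make the objective concrete, the sketch below evaluates the paper's partition criterion — the cardinality-weighted sum of squared distances of each cluster's points to its center, with one center fixed and the other taken as the cluster mean — for an arbitrary candidate partition of synthetic 2-D points. It only scores a given partition; it does not implement the 2-approximation algorithm.

```python
import numpy as np

def weighted_partition_cost(points, labels, given_center):
    """Criterion value for a two-cluster partition: for each cluster, the sum of
    squared distances of its points to its center, weighted by the cluster
    cardinality. Cluster 0 uses the fixed given center; cluster 1 uses its mean."""
    cost = 0.0
    for c in (0, 1):
        cluster = points[labels == c]
        center = given_center if c == 0 else cluster.mean(axis=0)
        cost += len(cluster) * np.sum((cluster - center) ** 2)
    return cost

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(4.0, 1.0, (20, 2))])
labels = np.array([0] * 30 + [1] * 20)            # a candidate partition with fixed cardinalities
print("criterion value:", weighted_partition_cost(pts, labels, given_center=np.zeros(2)))
```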
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; this is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications and deciding for each class which attributes are relevant.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...
Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression.
Beckstead, Jason W
2012-03-30
The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic strategy to isolate, examine, and remove suppression effects has been offered. In this article such an approach, rooted in confirmatory factor analysis theory and employing matrix algebra, is developed. Suppression is viewed as the result of criterion-irrelevant variance operating among predictors. Decomposition of predictor variables into criterion-relevant and criterion-irrelevant components using structural equation modeling permits derivation of regression weights with the effects of criterion-irrelevant variance omitted. Three examples with data from applied research are used to illustrate the approach: the first assesses child and parent characteristics to explain why some parents of children with obsessive-compulsive disorder accommodate their child's compulsions more so than do others, the second examines various dimensions of personal health to explain individual differences in global quality of life among patients following heart surgery, and the third deals with quantifying the relative importance of various aptitudes for explaining academic performance in a sample of nursing students. The approach is offered as an analytic tool for investigators interested in understanding predictor-criterion relationships when complex patterns of intercorrelation among predictors are present and is shown to augment dominance analysis.
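As a small simulated illustration of the suppression phenomenon discussed here (not the article's confirmatory-factor-analysis decomposition), the sketch below constructs a predictor contaminated with criterion-irrelevant variance and a classical suppressor that is nearly uncorrelated with the criterion, and shows how adding the suppressor raises the explained variance. All variables are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

t = rng.standard_normal(n)               # criterion-relevant component
s = rng.standard_normal(n)               # criterion-irrelevant component
y = t + 0.5 * rng.standard_normal(n)     # criterion
x1 = t + s                               # predictor contaminated with irrelevant variance
x2 = s                                   # classical suppressor: nearly uncorrelated with y

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print("corr(x2, y):      ", round(float(np.corrcoef(x2, y)[0, 1]), 3))
print("R2 with x1 alone: ", round(r_squared(x1[:, None], y), 3))
print("R2 with x1 and x2:", round(r_squared(np.column_stack([x1, x2]), y), 3))
```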
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve the performance of a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher order statistical information, and its minimization provides improved convergence, performance and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the solution other than the MMSE one has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
ERIC Educational Resources Information Center
Levy, Deborah L.; Bowman, Elizabeth A.; Abel, Larry; Krastoshevsky, Olga; Krause, Verena; Mendell, Nancy R.
2008-01-01
The "co-familiality" criterion for an endophenotype has two requirements: (1) clinically unaffected relatives as a group should show both a shift in mean performance and an increase in variance compared with controls; (2) performance scores should be heritable. Performance on the antisaccade task is one of several candidate endophenotypes for…
The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Goldstein, M. L.
2006-01-01
We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (from approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
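The variance directions discussed here are conventionally obtained by eigen-decomposing the covariance matrix of the field components (classic minimum variance analysis). The sketch below applies that standard recipe to synthetic fluctuations with an assumed anisotropy; the numbers are illustrative and are not solar wind data.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Eigen-decompose the covariance matrix of the field components; the
    eigenvector with the smallest eigenvalue is the minimum-variance direction
    and lambda_max / lambda_min measures the power anisotropy."""
    M = np.cov(B, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    return eigvals, eigvecs[:, 0], eigvecs[:, -1]

# synthetic fluctuations: smallest variance along z (taken as the mean-field
# direction), larger transverse variance -- purely illustrative numbers
rng = np.random.default_rng(5)
B = rng.standard_normal((4000, 3)) * np.array([1.0, 0.6, 0.2])

lam, e_min, e_max = minimum_variance_analysis(B)
print("power anisotropy (max/min):", round(float(lam[-1] / lam[0]), 1))
print("minimum-variance direction:", np.round(e_min, 2))
```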
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
A Case for Transforming the Criterion of a Predictive Validity Study
ERIC Educational Resources Information Center
Patterson, Brian F.; Kobrin, Jennifer L.
2011-01-01
This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margelef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values, including stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²), with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity into the model structure, considerable improvement was observed in statistical accuracy, with the model explaining 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing the stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scotch pine stand. Further, including the stand diversity indices in the AGB prediction model with the stand attributes provided important predictive contributions in estimating the total variance in AGB.
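The model comparison described here rests on information criteria. The sketch below shows the same kind of comparison on synthetic data with hypothetical predictor names (basal area, stand age, a Shannon-type index): two nested linear models are fitted and their R², AIC and BIC are reported. It is not the paper's dataset or model form.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 120

# hypothetical predictors: two stand attributes and one diversity index
basal_area = rng.uniform(10, 60, n)
stand_age = rng.uniform(20, 120, n)
shannon = rng.uniform(0.5, 2.5, n)
agb = 2.0 * basal_area + 0.5 * stand_age + 15.0 * shannon + rng.normal(0, 10, n)

X_attr = sm.add_constant(np.column_stack([basal_area, stand_age]))           # attributes only
X_full = sm.add_constant(np.column_stack([basal_area, stand_age, shannon]))  # + diversity index

for name, X in [("attributes only", X_attr), ("attributes + diversity", X_full)]:
    fit = sm.OLS(agb, X).fit()
    print(f"{name:24s} R2={fit.rsquared:.3f}  AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")
```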
NASA Astrophysics Data System (ADS)
Iyyappan, I.; Ponmurugan, M.
2018-03-01
A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine is working at the maximum Ω̇ criterion, its efficiency increases significantly from the efficiency at maximum power. We derive the general relations between the power, the efficiency at the maximum Ω̇ criterion and the minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound…
Lykins, Amy D; Cantor, James M; Kuban, Michael E; Blak, Thomas; Dickey, Robert; Klassen, Philip E; Blanchard, Ray
2010-03-01
Phallometric testing is widely considered the best psychophysiological procedure for assessing erotic preferences in men. Researchers have differed, however, on the necessity of setting some minimum criterion of penile response for ascertaining the interpretability of a phallometric test result. Proponents of a minimum criterion have generally based their view on the intuitive notion that "more is better" rather than any formal demonstration of this. The present study was conducted to investigate whether there is any empirical evidence for this intuitive notion, by examining the relation between magnitude of penile response and the agreement in diagnoses obtained in two test sessions using different laboratory stimuli. The results showed that examinees with inconsistent diagnoses responded less on both tests and that examinees with inconsistent diagnoses responded less on the second test after controlling for their response on the first test. Results also indicated that at response levels less than 1 cm³, diagnostic consistency was no better than chance, supporting the establishment of a minimum response level criterion.
An approximate spin design criterion for monoplanes, 1 May 1939
NASA Technical Reports Server (NTRS)
Seidman, O.; Donlan, C. J.
1976-01-01
An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.
Energy Efficiency Building Code for Commercial Buildings in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, John; Greenberg, Steve; Rubinstein, Francis
2000-09-30
1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
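A generic weighted least squares fit of a polynomial under heteroscedastic noise can be sketched as below; the data are synthetic and the weights are taken as the inverse error variances, which is the textbook choice rather than anything specific to the paddy study.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic data with non-constant error variance (heteroscedasticity)
n = 200
x = np.linspace(0.0, 10.0, n)
sigma = 0.2 + 0.3 * x                            # noise standard deviation grows with x
y = 1.0 + 0.8 * x - 0.05 * x**2 + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x, x**2])       # quadratic design matrix

# ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# weighted least squares: weights inversely proportional to the error variance,
# implemented by rescaling rows with the square roots of the weights
w = 1.0 / sigma**2
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

print("OLS coefficients:", np.round(beta_ols, 3))
print("WLS coefficients:", np.round(beta_wls, 3))
```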
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and achieve the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
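The mean-variance portfolio with a target return has a standard closed-form solution via Lagrange multipliers, sketched below on invented numbers (not the FBMKLCI data) and allowing short sales, which the study may or may not permit.

```python
import numpy as np

def mean_variance_weights(mu, cov, target_return):
    """Closed-form Markowitz solution: minimize w' C w subject to w' mu = target
    and sum(w) = 1 (short selling allowed)."""
    ones = np.ones(len(mu))
    inv_mu = np.linalg.solve(cov, mu)
    inv_one = np.linalg.solve(cov, ones)
    A, B, C = ones @ inv_one, ones @ inv_mu, mu @ inv_mu
    D = A * C - B**2
    lam = (A * target_return - B) / D
    gam = (C - B * target_return) / D
    return lam * inv_mu + gam * inv_one

# illustrative numbers (not FBMKLCI data): 4 assets, weekly mean returns and covariance
mu = np.array([0.002, 0.003, 0.004, 0.005])
cov = np.diag([0.001, 0.002, 0.003, 0.004])
w = mean_variance_weights(mu, cov, target_return=0.0035)
print("weights:", np.round(w, 3), " portfolio variance:", float(w @ cov @ w))
```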
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region. In doing so, the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
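For the trend-parameter side of D-optimality only (no kriging, no covariance-parameter criterion), the sketch below compares the log-determinant of the information matrix X'X for a spread-out design and a clustered design under an assumed first-order trend model; the designs and model are illustrative inventions.

```python
import numpy as np

def trend_model(point):
    """Model vector f(x) = (1, x1, x2) for a first-order trend."""
    return [1.0, point[0], point[1]]

def d_criterion(design):
    """log-determinant of the information matrix X'X for a candidate design."""
    X = np.array([trend_model(p) for p in design])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(8)
spread_design = [(x, y) for x in (0.0, 1.0) for y in (0.0, 1.0)] + [(0.5, 0.5)]
clustered_design = [tuple(rng.uniform(0.4, 0.6, 2)) for _ in range(5)]

print("corners + centre:", round(d_criterion(spread_design), 2))
print("clustered points:", round(d_criterion(clustered_design), 2))
```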
12 CFR 1263.17 - Rebuttable presumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... from the rating required by § 1263.11(b)(3)(i), or a variance from a performance trend criterion... any criminal, civil or administrative proceedings reflecting upon creditworthiness, business judgment...
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogenous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
Food addiction associations with psychological distress among people with type 2 diabetes.
Raymond, Karren-Lee; Lovell, Geoff P
2016-01-01
To assess the relationship between a food addiction (FA) model and psychological distress among a type 2 diabetes (t2d) sample. In a cross-sectional study, 334 participants with t2d diagnoses were invited to complete a web-based questionnaire. We measured variables of psychological distress implementing the Depression Anxiety and Stress Scale (DASS-21), the Yale Food Addiction Scale (YFAS), and other factors associated with t2d. A novel finding of our study is that people with t2d meeting the FA criterion had significantly higher depression, anxiety, and stress scores compared to participants who did not meet the FA criterion. Moreover, FA symptomology explained 35% of the unique variance in depression scores, 34% of the unique variance in anxiety scores, and 34% of the unique variance in stress scores, while, surprisingly, BMI explained less than 1% of the unique variance in scores. We identified that psychological distress among people with t2d was associated with the FA model, apparently more so than with BMI, indicating that further research in this realm is warranted. Moreover, the FA model may be beneficial when addressing treatment approaches for psychological distress among people with t2d.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...
12 CFR 925.17 - Rebuttable presumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... rating required by § 925.11(b)(3)(i) of this part, or a variance from a performance trend criterion... creditworthiness, business judgment, or moral turpitude since the most recent regulatory examination report, the...
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
NASA Astrophysics Data System (ADS)
Yan, Wei; Chang, Yuwen
2016-12-01
Considering the stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
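The jump-diffusion dynamics mentioned here (a Wiener component plus Poisson-driven jumps) can be simulated with a simple Euler-type scheme, sketched below with invented drift, volatility and jump parameters; it only generates a sample path and does not solve the paper's HJB equation or derive the efficient frontier.

```python
import numpy as np

def simulate_jump_diffusion(s0=1.0, mu=0.02, sigma=0.1, jump_rate=2.0,
                            jump_mean=0.0, jump_std=0.05, horizon=1.0,
                            steps=252, seed=9):
    """Euler-type simulation of an exchange rate following a jump-diffusion:
    geometric Brownian motion (Wiener part) with compound-Poisson log jumps."""
    rng = np.random.default_rng(seed)
    dt = horizon / steps
    s = np.empty(steps + 1)
    s[0] = s0
    for t in range(steps):
        diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(jump_rate * dt)
        jump = rng.normal(jump_mean, jump_std, n_jumps).sum()   # total log-jump over the step
        s[t + 1] = s[t] * np.exp(diffusion + jump)
    return s

path = simulate_jump_diffusion()
print("terminal exchange rate:", round(float(path[-1]), 4))
```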
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline, and the level evolution are the considered endpoints. Specific validation criteria based on a plus or minus 10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
Influences on Academic Achievement Across High and Low Income Countries: A Re-Analysis of IEA Data.
ERIC Educational Resources Information Center
Heyneman, S.; Loxley, W.
Previous international studies of science achievement put the data through a process of winnowing to decide which variables to keep in the final regressions. Variables were allowed to enter the final regressions if they met a minimum beta coefficient criterion of 0.05 averaged across rich and poor countries alike. The criterion was an average…
ERIC Educational Resources Information Center
Garcia-Quintana, Roan A.; Mappus, M. Lynne
1980-01-01
Norm-referenced data were utilized to determine the mastery cutoff score on a criterion-referenced test. Once a cutoff score on the norm-referenced measure is selected, the cutoff score on the criterion-referenced measure becomes the score that maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
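To make the D-optimality idea concrete, the sketch below scores candidate dose sets for a simple nonlinear threshold model by the determinant of the (approximate) information matrix evaluated at prior parameter guesses; the model form, parameter guesses, and candidate doses are illustrative assumptions, not the chlorpyrifos:carbaryl design.

```python
import numpy as np
from itertools import combinations

# D-optimality sketch for a threshold model y = b0 + b1*max(d - delta, 0):
# among 5-point dose designs, pick the one maximizing det(J'J), where J is
# the Jacobian of the mean with respect to (b0, b1, delta) at prior guesses.
b0, b1, delta = 1.0, 0.8, 2.0                 # assumed prior parameter guesses

def jacobian(doses):
    rows = []
    for d in doses:
        if d <= delta:
            rows.append([1.0, 0.0, 0.0])      # below threshold: only intercept
        else:
            rows.append([1.0, d - delta, -b1])
    return np.array(rows)

candidate_doses = np.linspace(0, 10, 11)
best_det, best_design = max(
    (np.linalg.det(jacobian(c).T @ jacobian(c)), c)
    for c in combinations(candidate_doses, 5))
print("D-optimal 5-point design:", [float(d) for d in best_design])
print("det(information) =", round(float(best_det), 1))
```

Because the Jacobian depends on the assumed slope and threshold, the selected design changes with those prior guesses, which mirrors the dependence on prior specification noted in the abstract.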
Fuzzy approaches to supplier selection problem
NASA Astrophysics Data System (ADS)
Ozkok, Beyza Ahlatcioglu; Kocken, Hale Gonce
2013-09-01
Supplier selection is a multi-criteria decision-making problem which includes both qualitative and quantitative factors. In the selection process many criteria may conflict with each other, so the decision-making process becomes complicated. In this study, we handle the supplier selection problem under uncertainty. In this context, we use the minimum criterion, the arithmetic mean criterion, the regret criterion, the optimistic criterion, the geometric mean, and the harmonic mean. Membership functions are created from the characteristics of these criteria, and we use them to evaluate alternative suppliers and provide consistent supplier selection decisions. A strong aspect of the methodology is that no expert opinion is needed during the analysis.
Physical Employment Standards for UK Firefighters
Stevenson, Richard D.M.; Siddall, Andrew G.; Turner, Philip F.J.; Bilzon, James L.J.
2017-01-01
Objective: The aim of this study was to assess sensitivity and specificity of surrogate physical ability tests as predictors of criterion firefighting task performance and to identify corresponding minimum muscular strength and endurance standards. Methods: Fifty-one (26 male; 25 female) participants completed three criterion tasks (ladder lift, ladder lower, ladder extension) and three corresponding surrogate tests [one-repetition maximum (1RM) seated shoulder press; 1RM seated rope pull-down; repeated 28 kg seated rope pull-down]. Surrogate test standards were calculated that best identified individuals who passed (sensitivity; true positives) and failed (specificity; true negatives) criterion tasks. Results: Best sensitivity/specificity achieved were 1.00/1.00 for a 35 kg seated shoulder press, 0.79/0.92 for a 60 kg rope pull-down, and 0.83/0.93 for 23 repetitions of the 28 kg rope pull-down. Conclusions: These standards represent performance on surrogate tests commensurate with minimum acceptable performance of essential strength-based occupational tasks in UK firefighters. PMID:28045801
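The following sketch shows, on simulated data only, how a surrogate-test standard of this kind can be chosen: scan candidate cutoffs, compute sensitivity and specificity against criterion-task pass/fail, and keep the cutoff that best balances the two (here via Youden's index, which is an assumption rather than the authors' procedure).

```python
import numpy as np

# Cutoff selection sketch: sensitivity = true positives among criterion
# passes, specificity = true negatives among criterion fails. The data
# below are simulated placeholders, not the firefighter measurements.
rng = np.random.default_rng(1)
passed_criterion = rng.random(51) < 0.6                    # criterion-task outcome
surrogate = np.where(passed_criterion,
                     rng.normal(40, 5, 51),
                     rng.normal(30, 5, 51))                # e.g. 1RM press score, kg

def sens_spec(cutoff):
    predicted_pass = surrogate >= cutoff
    sens = np.mean(predicted_pass[passed_criterion])
    spec = np.mean(~predicted_pass[~passed_criterion])
    return sens, spec

cutoffs = np.arange(surrogate.min(), surrogate.max(), 0.5)
best = max(cutoffs, key=lambda c: sum(sens_spec(c)) - 1)   # Youden's J
sens, spec = sens_spec(best)
print(f"chosen standard: {best:.1f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```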
Approximation for the Rayleigh Resolution of a Circular Aperture
ERIC Educational Resources Information Center
Mungan, Carl E.
2009-01-01
Rayleigh's criterion states that a pair of point sources are barely resolved by an optical instrument when the central maximum of the diffraction pattern due to one source coincides with the first minimum of the pattern of the other source. As derived in standard introductory physics textbooks, the first minimum for a rectangular slit of width "a"…
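For a quick numerical companion to this abstract, the snippet below evaluates the Rayleigh resolution limit for a slit (first minimum at sin θ = λ/a) and for a circular aperture (sin θ = 1.22 λ/D); the wavelength and aperture values are arbitrary examples.

```python
import math

# Rayleigh criterion: angular separation at which one source's central
# maximum falls on the other's first diffraction minimum.
wavelength = 550e-9      # m (green light, illustrative)
aperture = 5e-3          # m (slit width a or circular aperture diameter D)

theta_slit = math.asin(wavelength / aperture)
theta_circular = math.asin(1.22 * wavelength / aperture)
print(f"slit resolution limit:       {theta_slit:.2e} rad")
print(f"circular aperture limit:     {theta_circular:.2e} rad")
```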
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution spectrum estimation method. Based on SAR imaging theory, the signal model of SAR imagery is analyzed and shown to be suitable for data extrapolation methods that improve the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
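The minimum variance (Capon) spectrum estimator that the MVM step relies on can be sketched as follows; this shows only the spectral-estimation ingredient, not the authors' weighted-norm bandwidth extrapolation, and the test signal is synthetic.

```python
import numpy as np

# Minimum variance (Capon) spectrum: P(f) = 1 / (a(f)^H R^-1 a(f)), with R
# the sample covariance of length-M snapshots and a(f) the steering vector.
rng = np.random.default_rng(2)
N, M = 256, 32
n = np.arange(N)
x = (np.exp(2j*np.pi*0.20*n) + 0.8*np.exp(2j*np.pi*0.25*n)
     + 0.5*(rng.standard_normal(N) + 1j*rng.standard_normal(N)))

snapshots = np.array([x[i:i+M] for i in range(N - M)])
R = snapshots.T @ snapshots.conj() / snapshots.shape[0]
R_inv = np.linalg.inv(R + 1e-6*np.eye(M))          # light diagonal loading

freqs = np.linspace(0, 0.5, 501)
steer = np.exp(2j*np.pi*np.outer(np.arange(M), freqs))    # M x F steering matrix
P = 1.0 / np.real(np.einsum('mf,mk,kf->f', steer.conj(), R_inv, steer))
print("strongest spectral peak near f =", freqs[np.argmax(P)])
```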
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, Bayesian methods have become more popular for analyzing high dimensional gene expression data, as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
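For readers who want a concrete picture of on-line parameter identification of this general kind, the sketch below runs plain recursive least squares on a first-order discrete-time system; it omits the paper's treatment of multiplicative noise and its convergence proofs, and all system values are invented.

```python
import numpy as np

# Recursive least squares for y[k] = a*y[k-1] + b*u[k-1] + noise: a
# simplified stand-in for an on-line minimum variance parameter identifier.
rng = np.random.default_rng(3)
a_true, b_true = 0.8, 0.5
theta = np.zeros(2)                   # running estimates of [a, b]
P = 1000.0 * np.eye(2)                # estimate covariance (large = uninformed)
y_prev, u_prev = 0.0, 0.0

for k in range(500):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u_prev + 0.1 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])              # regressor
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)      # correct by the innovation
    P = P - np.outer(gain, phi) @ P
    y_prev, u_prev = y, u

print("estimated [a, b]:", np.round(theta, 3))
```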
New true-triaxial rock strength criteria considering intrinsic material characteristics
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Cheng; Quan, Xiaowei; Wang, Yanning; Yu, Liyuan; Jiang, Binsong
2018-02-01
A reasonable strength criterion should reflect the hydrostatic pressure effect, the minimum principal stress effect, and the intermediate principal stress effect. The former two effects can be described by the meridian curves, and the last one mainly depends on the Lode angle dependence function. Among three conventional strength criteria, i.e. the Mohr-Coulomb (MC), Hoek-Brown (HB), and Exponent (EP) criteria, the difference between the generalized compression and extension strengths of the EP criterion first increases and then decreases, tending to zero when the hydrostatic pressure is large enough. This is in accordance with intrinsic rock strength characteristics. Moreover, the critical hydrostatic pressure I_c corresponding to the maximum difference between generalized compression and extension strength can be easily adjusted by the minimum principal stress influence parameter K. The exponent function is therefore a more reasonable meridian curve, which well reflects the hydrostatic pressure effect and is employed to describe the generalized compression and extension strengths. Meanwhile, three Lode angle dependence functions, L_MN, L_WW, and L_YMH, which unconditionally satisfy the convexity and differentiability requirements, are employed to represent the intermediate principal stress effect. Recognizing that the actual strength surface should be located between the generalized compression and extension surfaces, new true-triaxial criteria are proposed by combining the two states of the EP criterion through a Lode angle dependence function at the same Lode angle. The proposed new true-triaxial criteria have the same strength parameters as the EP criterion. Finally, 14 groups of triaxial test data are employed to validate the proposed criteria. The results show that the three new true-triaxial exponent criteria, especially the Exponent Willam-Warnke (EPWW) criterion, give much lower misfits, which illustrates that the EP criterion and L_WW have more reasonable meridian and deviatoric function forms, respectively. The proposed new true-triaxial strength criteria can provide a theoretical foundation for stability analysis and optimization of support design in rock engineering.
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted, and the filter is shown to compare favorably with the well-known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
Development of a short version of the modified Yale Preoperative Anxiety Scale.
Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N
2014-09-01
The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and reduce time points of assessment. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced a Cronbach α of at least 0.92. To reduce the number of time points of assessment, a minimum Cohen d effect size criterion of a 0.48 change in mYPAS score across time points was used. This led to eliminating the walk to the OR and entrance to the OR time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
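The effect-size screen used to drop assessment points can be illustrated with simulated scores (not mYPAS data): compute Cohen's d for the change between consecutive time points and flag transitions below the 0.48 threshold.

```python
import numpy as np

# Cohen's d between consecutive assessment points; transitions with |d|
# below the minimum criterion (0.48 in the study) are candidates for
# elimination. Scores below are simulated placeholders.
def cohens_d(x, y):
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2.0)
    return (y.mean() - x.mean()) / pooled_sd

rng = np.random.default_rng(4)
time_points = ["holding", "walk to OR", "entrance to OR", "induction"]
scores = {
    "holding": rng.normal(30, 8, 200),
    "walk to OR": rng.normal(31, 8, 200),        # little change
    "entrance to OR": rng.normal(33, 8, 200),    # little change
    "induction": rng.normal(45, 12, 200),        # large change
}

for earlier, later in zip(time_points, time_points[1:]):
    d = cohens_d(scores[earlier], scores[later])
    verdict = "retain" if abs(d) >= 0.48 else "candidate for elimination"
    print(f"{earlier} -> {later}: d = {d:.2f} ({verdict})")
```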
Criterion Predictability: Identifying Differences Between r-Squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
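The estimator at the heart of this approach can be written compactly: for correlated estimates x of a common quantity with covariance matrix S, the maximum likelihood (minimum variance unbiased) combination is (1'S^-1 x)/(1'S^-1 1) with variance 1/(1'S^-1 1). The numbers below are illustrative, not SAM-CE or VIM results.

```python
import numpy as np

# Minimum variance combination of correlated eigenvalue estimates.
x = np.array([1.182, 1.179, 1.185])                  # individual k-eff estimates
S = np.array([[4.0, 1.5, 1.0],
              [1.5, 3.0, 0.8],
              [1.0, 0.8, 5.0]]) * 1e-6               # assumed covariance matrix

ones = np.ones(len(x))
S_inv = np.linalg.inv(S)
combined = ones @ S_inv @ x / (ones @ S_inv @ ones)
combined_var = 1.0 / (ones @ S_inv @ ones)
print(f"combined estimate {combined:.5f}, std dev {np.sqrt(combined_var):.5f}")
print(f"simple average    {x.mean():.5f}")
```

Replacing the population covariance with sample variances and correlation coefficients, as the abstract notes, still gives nearly minimum variance estimates when enough histories are aggregated.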
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines was considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.
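The minimum variance eigenvalue analysis mentioned here amounts to an eigen-decomposition of the magnetic field covariance matrix; a synthetic sketch (not the spacecraft data) is:

```python
import numpy as np

# Minimum variance analysis (MVA): the eigenvector of the field covariance
# matrix with the smallest eigenvalue estimates the minimum variance
# direction. Here a circularly polarized wave propagating along z is
# simulated, so the recovered direction should be close to +/- z.
rng = np.random.default_rng(5)
n = 400
t = np.linspace(0, 40*np.pi, n)
B = np.column_stack([np.cos(t), np.sin(t), np.zeros(n)])    # transverse rotation
B += np.array([2.0, 0.0, 3.0]) + 0.05*rng.standard_normal((n, 3))  # mean field + noise

M = np.cov(B, rowvar=False)                 # 3x3 magnetic variance matrix
eigvals, eigvecs = np.linalg.eigh(M)        # eigenvalues in ascending order
print("eigenvalues:", np.round(eigvals, 4))
print("minimum variance direction:", np.round(eigvecs[:, 0], 3))
```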
Forney, K Jean; Bodell, Lindsay P; Haedt-Matt, Alissa A; Keel, Pamela K
2016-07-01
Of the two primary features of binge eating, loss of control (LOC) eating is well validated while the role of eating episode size is less clear. Given the ICD-11 proposal to eliminate episode size from the binge-eating definition, the present study examined the incremental validity of the size criterion, controlling for LOC. Interview and questionnaire data come from four studies of 243 women with bulimia nervosa (n = 141) or purging disorder (n = 102). Hierarchical linear regression tested if the largest reported episode size, coded in kilocalories, explained additional variance in eating disorder features, psychopathology, personality traits, and impairment, holding constant LOC eating frequency, age, and body mass index (BMI). Analyses also tested if episode size moderated the association between LOC eating and these variables. Holding LOC constant, episode size explained significant variance in disinhibition, trait anxiety, and eating disorder-related impairment. Episode size moderated the association of LOC eating with purging frequency and depressive symptoms, such that in the presence of larger eating episodes, LOC eating was more closely associated with these features. Neither episode size nor its interaction with LOC explained additional variance in BMI, hunger, restraint, shape concerns, state anxiety, negative urgency, or global functioning. Taken together, results support the incremental validity of the size criterion, in addition to and in combination with LOC eating, for defining binge-eating episodes in purging syndromes. Future research should examine the predictive validity of episode size in both purging and nonpurging eating disorders (e.g., binge eating disorder) to inform nosological schemes. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:651-662). © 2016 Wiley Periodicals, Inc.
Anthropometry as a predictor of bench press performance done at different loads.
Caruso, John F; Taylor, Skyler T; Lutz, Brant M; Olson, Nathan M; Mason, Melissa L; Borgsmiller, Jake A; Riner, Rebekah D
2012-09-01
The purpose of our study was to examine the ability of anthropometric variables (body mass, total arm length, biacromial width) to predict bench press performance at both maximal and submaximal loads. Our methods required 36 men to visit our laboratory and submit to anthropometric measurements, followed by lifting as much weight as possible in good form one time (1 repetition maximum, 1RM) in the exercise. They made 3 more visits in which they performed 4 sets of bench presses to volitional failure at 1 of 3 (40, 55, or 75% 1RM) submaximal loads. An accelerometer (Myotest Inc., Royal Oak MI) measured peak force, velocity, and power after each submaximal load set. With stepwise multivariate regression, our 3 anthropometric variables attempted to explain significant amounts of variance for 13 bench press performance indices. For criterion measures that reached significance, separate Pearson product moment correlation coefficients further assessed if the strength of association each anthropometric variable had with the criterion was also significant. Our analyses showed that anthropometry explained significant amounts (p < 0.05) of variance for 8 criterion measures. It was concluded that body mass had strong univariate correlations with 1RM and force-related measures, total arm length was moderately associated with 1RM and criterion variables at the lightest load, whereas biacromial width had an inverse relationship with the peak number of repetitions performed per set at the 2 lighter loads. Practical applications suggest results may help coaches and practitioners identify anthropometric features that may best predict various measures of bench press prowess in athletes.
Splett, Joni W; Smith-Millman, Marissa; Raborn, Anthony; Brann, Kristy L; Flaspohler, Paul D; Maras, Melissa A
2018-03-08
The current study examined between-teacher variance in teacher ratings of student behavioral and emotional risk to identify student, teacher and classroom characteristics that predict such differences and can be considered in future research and practice. Data were taken from seven elementary schools in one school district implementing universal screening, including 1,241 students rated by 68 teachers. Students were mostly African American (68.5%) with equal gender (female 50.1%) and grade-level distributions. Teachers, mostly White (76.5%) and female (89.7%), completed both a background survey regarding their professional experiences and demographic characteristics and the Behavior Assessment System for Children (Second Edition) Behavioral and Emotional Screening System-Teacher Form for all students in their class, rating an average of 17.69 students each. Extant student data were provided by the district. Analyses followed multilevel linear model stepwise model-building procedures. We detected a significant amount of variance in teachers' ratings of students' behavioral and emotional risk at both student and teacher/classroom levels, with student predictors explaining about 39% of student-level variance and teacher/classroom predictors explaining about 20% of between-teacher differences. The final model fit the data (Akaike information criterion = 8,687.709; pseudo-R2 = 0.544) significantly better than the null model (Akaike information criterion = 9,457.160). Significant predictors included student gender, race/ethnicity, academic performance and disciplinary incidents, teacher gender, student-teacher gender interaction, teacher professional development in behavior screening, and classroom academic performance. Future research and practice should interpret teacher-rated universal screening of students' behavioral and emotional risk with consideration of the between-teacher variance unrelated to student behavior detected. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Statistically Self-Consistent and Accurate Errors for SuperDARN Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.
2018-01-01
The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contain useful information. In low signal-to-noise ratio (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable or trustworthy quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
An application of the LC-LSTM framework to the self-esteem instability case.
Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John
2013-10-01
The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variances in scores of the RGSE can be decomposed into six components: stable self-esteem (40 %), ephemeral (or temporal-state) variance (36 %), stable negative method variance (9 %), stable positive method variance (4 %), specific variance (1 %) and random error variance (10 %). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.
An evaluation of the Psychache Scale on an offender population.
Mills, Jeremy F; Green, Kate; Reddon, John R
2005-10-01
This study examined the generalizability of a self-report measure of psychache to an offender population. The factor structure, construct validity, and criterion validity of the Psychache Scale were assessed in 136 male prison inmates. The results showed that the Psychache Scale has a single underlying factor structure and is strongly associated with measures of depression and hopelessness and moderately associated with psychiatric symptoms and the criterion variable of a history of prior suicide attempts. The variables of depression, hopelessness, and psychiatric symptoms all contributed unique variance to psychache. Discussion centers on psychache's theoretical application to the prediction of suicide.
Technical Guidance for Conducting ASVAB Validation/Standards Studies in the U.S. Navy
2015-02-01
the criterion), we can compute the variance of X in the unrestricted group, S_x^2, and in the restricted (selected) group, s_x^2. In contrast, we...well as the selected group, s_x^2. We also know the variance of Y in the selected group, s_y^2, and the correlation of X and Y in the selected...and AS. Five levels of selection ratio (1.0, .8, .6, .4, and .2) and eight sample sizes (50, 75, 100, 150, 225, 350, 500, and 800) were considered
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
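A compact way to see the ANOVA route to the repeatability coefficient and to the minimum number of measurements (under the usual balanced-design formulas, with simulated yields rather than the soursop data) is:

```python
import numpy as np

# Balanced one-way ANOVA repeatability: with g genotypes measured m times,
# r = (MS_genotype - MS_error) / (MS_genotype + (m-1)*MS_error), and the
# number of measurements needed for a target reliability R is
# m0 = R*(1-r) / (r*(1-R)).
rng = np.random.default_rng(6)
g, m = 71, 16
genotype_effects = rng.normal(0, 10, size=(g, 1))
data = 50 + genotype_effects + rng.normal(0, 8, size=(g, m))   # simulated fruit yield

grand = data.mean()
ms_genotype = m * np.sum((data.mean(axis=1) - grand)**2) / (g - 1)
ms_error = np.sum((data - data.mean(axis=1, keepdims=True))**2) / (g * (m - 1))
r = (ms_genotype - ms_error) / (ms_genotype + (m - 1) * ms_error)

R_target = 0.90
m0 = R_target * (1 - r) / (r * (1 - R_target))
print(f"repeatability r = {r:.3f}; measurements needed for R = {R_target}: {np.ceil(m0):.0f}")
```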
Genome-wide mapping of loci explaining variance in scrotal circumference in Nellore cattle
USDA-ARS?s Scientific Manuscript database
The reproductive performance of bulls has a high impact on the beef cattle industry. Scrotal circumference (SC) is the most recorded reproductive trait in beef herds, and is used as a major selection criterion to improve precocity and fertility. The characterization of genomic regions affecting SC...
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
1981-08-01
meaning of dimensionality of achievement data. It also shows the importance of construct validity, even in criterion-referenced testing of the cognitive aspect of performance, and that the traditional means of item analysis that are based on taking the variances of binary scores and content analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells, the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size correlation are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
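The final step described above, going from an internal friction angle to a coefficient of earth at rest and then to a minimum in situ stress, can be sketched as follows. Jaky's relation K0 = 1 - sin(phi) is used purely for illustration; the paper's own grain-size and Mohr-Coulomb correlations are not reproduced, and the stress values are placeholders rather than SFE or MWX data.

```python
import math

# Friction angle -> coefficient of earth at rest -> minimum horizontal stress
# (effective-stress form with pore pressure). All numbers are illustrative.
phi_deg = 32.0            # internal friction angle from triaxial tests, degrees
sigma_v = 55.0            # vertical (overburden) stress, MPa
pore_pressure = 25.0      # MPa

k0 = 1.0 - math.sin(math.radians(phi_deg))          # Jaky's approximation
sigma_h_min = k0 * (sigma_v - pore_pressure) + pore_pressure
print(f"K0 = {k0:.3f}, estimated minimum horizontal stress = {sigma_h_min:.1f} MPa")
```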
Fatigue acceptance test limit criterion for larger diameter rolled thread fasteners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kephart, A.R.
1997-05-01
This document describes a fatigue lifetime acceptance test criterion by which studs having rolled threads larger than 1.0 inch in diameter can be assured to meet minimum quality attributes associated with a controlled rolling process. This criterion is derived from a stress-dependent, room-temperature air fatigue database for test studs having 0.625 inch diameter threads of Alloys X-750 HTH and direct aged 625. Anticipated fatigue lives of larger threads are based on thread root elastic stress concentration factors, which increase with increasing thread diameter. Over the thread size range of interest, a 30% increase in notch stress is equivalent to a factor of five (5X) reduction in fatigue life. The resulting diameter-dependent fatigue acceptance criterion is normalized to the aerospace rolled thread acceptance standards for a 1.0 inch diameter, 0.125 inch pitch, Unified National thread with a controlled Root radius (UNR). Testing was conducted at a stress of 50% of the minimum specified material ultimate strength, 80 ksi, and at a stress ratio (R) of 0.10. Limited test data for fastener diameters of 1.00 to 2.25 inches are compared to the acceptance criterion. Sensitivity of the fatigue life of threads to test nut geometry variables was also shown to be dependent on notch stress conditions. Bearing surface concavity of the compression nuts and thread flank contact mismatch conditions can significantly affect fastener fatigue life. Without improved controls these conditions could potentially provide misleading acceptance data. Alternate test nut geometry features are described and implemented in the rolled thread stud specification, MIL-DTL-24789(SH), to mitigate the potential effects on fatigue acceptance data.
[Acoustic conditions in open plan offices - Pilot test results].
Mikulski, Witold
The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and an acoustic evaluation of open space rooms. The acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), the sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77-1.58 m2 (minimum permissible value - 1.1 m2; 2 out of 6 rooms met the criterion), the distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), the A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), and the spatial decay rate of speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, comprising a sound absorbing suspended ceiling, sound absorbing materials on the walls, carpet flooring and sound absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Podsakoff, P M; MacKenzie, S B; Bommer, W H
1996-08-01
A meta-analysis was conducted to estimate more accurately the bivariate relationships between leadership behaviors, substitutes for leadership, and subordinate attitudes, role perceptions, and performance, and to examine the relative strengths of the relationships among these variables. Estimates of 435 relationships were obtained from 22 studies containing 36 independent samples. The findings showed that the combination of the substitutes variables and leader behaviors accounts for a majority of the variance in employee attitudes and role perceptions and a substantial proportion of the variance in in-role and extra-role performance; on average, the substitutes for leadership uniquely accounted for more of the variance in the criterion variables than did leader behaviors.
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the Wind spacecraft at the Lagrangian point L1, from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, allowing for a non-null impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with the two methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
O'Brien, B J; Sculpher, M J
2000-05-01
Current principles of cost-effectiveness analysis emphasize the rank ordering of programs by expected economic return (eg, quality-adjusted life-years gained per dollar expended). This criterion ignores the variance associated with the cost-effectiveness of a program, yet variance is a common measure of risk when financial investment options are appraised. Variation in health care program return is likely to be a criterion of program selection for health care managers with fixed budgets and outcome performance targets. Characterizing health care resource allocation as a risky investment problem, we show how concepts of portfolio analysis from financial economics can be adopted as a conceptual framework for presenting cost-effectiveness data from multiple programs as mean-variance data. Two specific propositions emerge: (1) the current convention of ranking programs by expected return is a special case of the portfolio selection problem in which the decision maker is assumed to be indifferent to risk, and (2) for risk-averse decision makers, the degree of joint risk or covariation in cost-effectiveness between programs will create incentives to diversify an investment portfolio. The conventional normative assumption of risk neutrality for social-level public investment decisions does not apply to a large number of health care resource allocation decisions in which health care managers seek to maximize returns subject to budget constraints and performance targets. Portfolio theory offers a useful framework for studying mean-variance tradeoffs in cost-effectiveness and offers some positive predictions (and explanations) of actual decision making in the health care sector.
Optimization of solar cell contacts by system cost-per-watt minimization
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates the interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects by fitting a curve to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
Four-Dimensional Golden Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2015-02-25
The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness of fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
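A one-dimensional version of the golden section search makes the bracketing idea concrete (the report's four-dimensional variant applies the same subdivision to each parameter range):

```python
import math

# Golden section search: shrink the bracket [a, b] around the minimum using
# only function values; the stopping rule depends on the bracket width, not
# on the magnitude or shape of the function being minimized.
def golden_search(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

chi_square = lambda x: (x - 1.7)**2 + 3.0     # stand-in goodness-of-fit surface
print(golden_search(chi_square, 0.0, 5.0))    # converges near 1.7
```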
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
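One common parameterization of the generalized-log family referred to above is glog(x; c) = ln(x + sqrt(x^2 + c^2)); the sketch below shows, on a simulated two-component error model rather than real microarray data, how it roughly equalizes the standard deviation across signal levels when c is close to the ratio of the additive to the multiplicative error scale. The crude choice of c here is only illustrative; the paper estimates the transformation parameter jointly with a linear model.

```python
import numpy as np

# glog transform and a quick check of variance stabilization under a
# multiplicative-plus-additive error model (all values are simulated).
def glog(x, c):
    x = np.asarray(x, float)
    return np.log(x + np.sqrt(x**2 + c**2))

rng = np.random.default_rng(7)
sigma_mult, sigma_add = 0.15, 40.0
c = sigma_add / sigma_mult

for mu in [10.0, 100.0, 1000.0, 10000.0]:            # true expression levels
    reps = (mu * np.exp(rng.normal(0, sigma_mult, 5000))
            + rng.normal(0, sigma_add, 5000))
    print(f"level {mu:>7.0f}: raw SD = {reps.std():8.1f}, "
          f"glog SD = {glog(reps, c).std():.3f}")
```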
ERIC Educational Resources Information Center
Zwick, Rebecca
2012-01-01
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
Ng, Edmond S-W; Diaz-Ordaz, Karla; Grieve, Richard; Nixon, Richard M; Thompson, Simon G; Carpenter, James R
2016-10-01
Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data. © The Author(s) 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
Cold weather paving requirements for bituminous concrete.
DOT National Transportation Integrated Search
1973-01-01
Cold weather paving specifications were developed from work by Corlew and Dickson, who used a computer solution to predict the cooling rate of bituminous concrete. Virginia had used a minimum atmospheric temperature as a criterion; however, it was ev...
Evidence for Response Bias as a Source of Error Variance in Applied Assessment
ERIC Educational Resources Information Center
McGrath, Robert E.; Mitchell, Matthew; Kim, Brian H.; Hough, Leaetta
2010-01-01
After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of…
Mobile Phone Use in a Developing Country: A Malaysian Empirical Study
ERIC Educational Resources Information Center
Yeow, Paul H. P.; Yen Yuen, Yee; Connolly, Regina
2008-01-01
This study examined the factors that influence consumer satisfaction with mobile telephone use in Malaysia. The validity of the study's constructs, criterion, and content was confirmed. Construct validity was verified through the factor analysis with a total variance of 73.72 percent explained by all six independent factors. Content validity was…
Evaluation Criterion for Quality Assessment of E-Learning Content
ERIC Educational Resources Information Center
Al-Alwani, Abdulkareem
2014-01-01
Research trends related to e-learning systems are oriented towards increasing the efficiency and capacity of the systems, thus they reflect a large variance in performance when considering content conformity and quality standards. The Framework related to standardisation of digital content for e-learning systems is likely to play a significant…
NASA Astrophysics Data System (ADS)
Bai, Yan-Kui; Li, Shu-Shen; Zheng, Hou-Zhi
2005-11-01
We present a method for checking the Peres separability criterion in an arbitrary bipartite quantum state ρ_AB within a local operations and classical communication (LOCC) scenario. The method does not require the noise operation that is needed to make the partial transposition map physically implementable. The main task for the two observers, Alice and Bob, is to measure some specific functions of the partially transposed matrix. With these functions, they can determine the eigenvalues of ρ_AB^T_B, among which the minimum serves as an entanglement witness.
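For comparison with the LOCC scheme described in this abstract, a direct numerical check of the Peres (PPT) criterion, assuming full knowledge of the density matrix, looks like this:

```python
import numpy as np

# Partial transpose on subsystem B of a two-qubit state and inspection of
# the minimum eigenvalue; a negative value witnesses entanglement.
def partial_transpose_B(rho, dA=2, dB=2):
    r = rho.reshape(dA, dB, dA, dB)                 # indices: a, b, a', b'
    return r.transpose(0, 3, 2, 1).reshape(dA*dB, dA*dB)   # swap b <-> b'

def werner_state(p):
    """p * |psi-><psi-| + (1-p) * I/4, entangled for p > 1/3."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

for p in [0.2, 0.5, 0.9]:
    min_eig = np.linalg.eigvalsh(partial_transpose_B(werner_state(p))).min()
    verdict = "entangled" if min_eig < 0 else "PPT (separable for two qubits)"
    print(f"p = {p}: min eigenvalue of rho^T_B = {min_eig:+.3f}  ({verdict})")
```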
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
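A toy numerical sketch of the decision rule discussed above (the cost model, sensitivity grid, and candidate distributions are invented for illustration) is:

```python
import numpy as np

# Imprecise-probability decision sketch: each emissions option has a range
# of expected costs over a set of candidate distributions for climate
# sensitivity; the precautionary rule picks the option with the smallest
# upper expected cost.
sensitivities = np.linspace(1.5, 6.0, 10)                  # deg C per CO2 doubling
candidate_pdfs = [np.exp(-(sensitivities - m)**2 / (2*s**2))
                  for m, s in [(2.5, 0.6), (3.0, 1.0), (4.0, 1.2)]]
candidate_pdfs = [p / p.sum() for p in candidate_pdfs]     # the imprecise set

def total_cost(emissions_level, sensitivity):
    abatement = (1.0 - emissions_level)**2 * 5.0           # cost of cutting emissions
    damages = emissions_level * sensitivity**2 * 0.4       # climate damage cost
    return abatement + damages

for e in [0.4, 0.6, 0.8, 1.0]:                             # fraction of baseline emissions
    expectations = [float(sum(p * total_cost(e, s)
                              for p, s in zip(pdf, sensitivities)))
                    for pdf in candidate_pdfs]
    print(f"emissions {e:.1f}: expected cost in "
          f"[{min(expectations):.2f}, {max(expectations):.2f}]")
```

Under the minimum upper expected cost criterion, the option whose upper bound is smallest would be selected, which is generally more cautious than ranking options by a single best-guess expectation.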
Zubeidat, Ihab; Salinas, José María; Sierra, Juan Carlos; Fernández-Parra, Antonio
2007-01-01
In this study, we analyzed the reliability and validity of the Social Interaction Anxiety Scale (SIAS) and propose a criterion to separate youths with specific and generalized social anxiety from youths without social anxiety. A sample of 1012 Spanish youths attending school completed the SIAS, the Liebowitz Social Anxiety Scale, the Social Avoidance and Distress Scale, the Fear of Negative Evaluation Scale, the Youth Self-Report for Ages 11-18 and the Minnesota Multiphasic Personality Inventory-Adolescent. The factor analysis suggests the existence of three factors in the SIAS, the first two of which explain most of the variance of the construct assessed. Internal consistency is adequate for the first two factors. The SIAS shows adequate theoretical validity with respect to the scores of different variables related to social interaction. Analysis of the criterion scores yields three clearly differentiated clusters. Within the third cluster, two social anxiety groups - specific and generalized - were identified by means of a quantitative separation criterion.
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm, however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Measures and Interpretations of Vigilance Performance: Evidence Against the Detection Criterion
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1998-01-01
Operators' performance in a vigilance task is often assumed to depend on their choice of a detection criterion. When the signal rate is low this criterion is set high, causing the hit and false alarm rates to be low. With increasing time on task the criterion presumably tends to increase even further, thereby further decreasing the hit and false alarm rates. Virtually all of the empirical evidence for this simple interpretation is based on estimates of the bias measure Beta from signal detection theory. In this article, I describe a new approach to studying decision making that does not require the technical assumptions of signal detection theory. The results of this new analysis suggest that the detection criterion is never biased toward either response, even when the signal rate is low and the time on task is long. Two modifications of the signal detection theory framework are considered to account for this seemingly paradoxical result. The first assumes that the signal rate affects the relative sizes of the variances of the information distributions; the second assumes that the signal rate affects the logic of the operator's stopping rule. Actual or potential applications of this research include the improved training and performance assessment of operators in areas such as product quality control, air traffic control, and medical and clinical diagnosis.
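For readers unfamiliar with the bias measure Beta discussed above, the short sketch below shows how Beta and the criterion are conventionally estimated from hit and false-alarm rates under the equal-variance Gaussian model of signal detection theory; the rates used are hypothetical, not data from the article.

```python
# Illustrative estimation of d', the criterion location c, and the bias measure
# Beta from hit and false-alarm rates under the equal-variance Gaussian model of
# signal detection theory. The example rates are hypothetical, not study data.
import math
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    z_h = norm.ppf(hit_rate)                   # z-transform of the hit rate
    z_fa = norm.ppf(fa_rate)                   # z-transform of the false-alarm rate
    d_prime = z_h - z_fa                       # sensitivity
    c = -(z_h + z_fa) / 2.0                    # criterion relative to the neutral point
    beta = math.exp((z_fa**2 - z_h**2) / 2.0)  # likelihood-ratio bias at the criterion
    return d_prime, c, beta

d, c, beta = sdt_indices(hit_rate=0.60, fa_rate=0.05)
print(f"d' = {d:.2f}, c = {c:.2f}, Beta = {beta:.2f}")
```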
ERIC Educational Resources Information Center
Owen, Steven V.; Feldhusen, John F.
This study compares the effectiveness of three models of multivariate prediction for academic success in identifying the criterion variance of achievement in nursing education. The first model involves the use of an optimum set of predictors and one equation derived from a regression analysis on first semester grade average in predicting the…
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions, the least-squares solution, and potentially three other low residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
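The TAQMV algorithm itself is algebraic and closed form. As a rough illustration of the underlying overdetermined weighted least-squares problem only, the sketch below solves for position and time iteratively with a Gauss-Newton step, treating the assumed altitude as one more weighted residual over a spherical (not ellipsoidal) earth; station geometry, noise levels, and the initial guess are all invented, and convergence depends on that guess.

```python
# Sketch of a weighted least-squares TOA solver with a soft altitude constraint.
# This is NOT the closed-form TAQMV algorithm described above; it is a generic
# Gauss-Newton illustration of the same overdetermined problem, with a spherical
# earth and made-up station geometry and noise levels.
import numpy as np

C = 299792.458           # speed of light, km/s
R_EARTH = 6371.0         # spherical-earth radius, km (simplification)

def solve_toa_altitude(stations, toas, alt, sigma_toa, sigma_alt, iters=20):
    """stations: (N, 3) km; toas: (N,) s; alt: assumed altitude above the sphere, km."""
    x = np.array([R_EARTH + alt, 0.0, 0.0])   # crude initial position guess
    t = float(np.min(toas))                   # crude initial emission-time guess
    for _ in range(iters):
        d = np.linalg.norm(stations - x, axis=1)                 # geometric ranges
        res_toa = (d / C + t - toas) / sigma_toa                 # weighted TOA residuals
        res_alt = (np.linalg.norm(x) - (R_EARTH + alt)) / sigma_alt
        res = np.append(res_toa, res_alt)
        # Jacobian rows: derivatives of each residual w.r.t. (x, y, z, t)
        J_toa = np.hstack([(x - stations) / (C * d[:, None]),
                           np.ones((len(toas), 1))]) / sigma_toa
        J_alt = np.append(x / np.linalg.norm(x), 0.0) / sigma_alt
        J = np.vstack([J_toa, J_alt])
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x, t = x + step[:3], t + step[3]
    return x, t

# Example with a made-up geometry of five ground stations and noiseless TOAs.
stations = np.array([[7000.0, 0.0, 0.0], [0.0, 7000.0, 0.0], [0.0, 0.0, 7000.0],
                     [-7000.0, 0.0, 500.0], [4000.0, -4000.0, 3000.0]])
true_pos = np.array([6771.0, 500.0, 300.0])
toas = np.linalg.norm(stations - true_pos, axis=1) / C           # emission time t0 = 0
x_hat, t_hat = solve_toa_altitude(stations, toas,
                                  alt=np.linalg.norm(true_pos) - R_EARTH,
                                  sigma_toa=1e-7, sigma_alt=1.0)
print(x_hat, t_hat)
```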
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
IRIS Summary and Supporting Documents for Methylmercury ...
In January 2001, U.S. EPA finalized the guidance for methylmercury in the water quality criteria for states and authorized tribes. The links below take you to the best resources for this guidance. This final Guidance for Implementing the January 2001 Methylmercury Water Quality Criterion provides technical guidance to states and authorized tribes on how they may want to use the January 2001 fish tissue-based recommended water quality criterion for methylmercury in surface water protection programs (e.g., TMDLs, NPDES permitting). The guidance addresses questions related to water quality standards adoption (e.g., site-specific criteria, variances), assessments, monitoring, TMDLs, and NPDES permitting. The guidance consolidates existing EPA guidance where relevant to mercury.
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average in which the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
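As a concrete illustration of the univariate composite estimate described above, the sketch below combines two independent estimates by inverse-variance weighting; the numerical values are hypothetical.

```python
# Univariate composite (inverse-variance weighted) estimate of a population
# parameter from two independent prior estimates. Values are hypothetical.
def composite_estimate(x1, var1, x2, var2):
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    estimate = (w1 * x1 + w2 * x2) / (w1 + w2)   # weighted average
    variance = 1.0 / (w1 + w2)                   # never larger than either input variance
    return estimate, variance

# Example: a model forecast and a field survey of forest cover (percent).
est, var = composite_estimate(x1=62.0, var1=9.0, x2=58.0, var2=4.0)
print(est, var)   # the combined estimate leans toward the lower-variance survey
```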
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean ‘minimizing the risk, given a certain level of returns’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (as in Markowitz’s model), and semi-variance as risk measures. In this paper we investigated portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach works better for non-symmetric return probability distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
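To make the risk measure concrete, the sketch below computes the scenario-based CVaR of a fixed portfolio; the scenario returns and weights are simulated, and the genetic-algorithm search over integer transaction lots described in the abstract is not reproduced.

```python
# Scenario-based CVaR (expected shortfall) of a fixed portfolio.
# Returns and weights are hypothetical; the GA search over integer transaction
# lots described in the abstract is not implemented here.
import numpy as np

def portfolio_cvar(returns, weights, alpha=0.95):
    """returns: (scenarios, assets) matrix of simple returns; weights sum to 1."""
    losses = -returns @ weights                 # loss = negative portfolio return
    var = np.quantile(losses, alpha)            # Value-at-Risk at level alpha
    tail = losses[losses >= var]                # scenarios in the worst (1 - alpha) tail
    return tail.mean()                          # CVaR = mean loss in the tail

rng = np.random.default_rng(0)
scenario_returns = rng.normal(0.001, 0.02, size=(1000, 3))   # 3 assets, 1000 scenarios
w = np.array([0.5, 0.3, 0.2])
print("95% CVaR:", portfolio_cvar(scenario_returns, w))
```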
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
An opening criterion for dust gaps in protoplanetary discs
NASA Astrophysics Data System (ADS)
Dipierro, Giovanni; Laibe, Guillaume
2017-08-01
We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
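As a simplified, fixed-effects sketch of the broken-line linear (BLL) ascending fit and the BIC comparison described above (omitting the random block effects and heterogeneous residual variances handled by the mixed models in the article), one could proceed as follows with simulated data:

```python
# Fixed-effects sketch of a broken-line linear (BLL) ascending dose-response fit
# with a BIC comparison against a quadratic polynomial (QP). Simulated data only;
# the random block effects and heteroskedastic residuals used in the article are omitted.
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, breakpoint):
    # plateau for x >= breakpoint, linear rise below it
    return plateau + slope * np.minimum(x - breakpoint, 0.0)

def qp(x, b0, b1, b2):
    return b0 + b1 * x + b2 * x**2

def bic(y, yhat, n_params):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(1)
trp_lys = np.repeat(np.arange(14.0, 20.5, 0.5), 6)             # simulated SID Trp:Lys ratios, %
gf = bll(trp_lys, 0.68, 0.015, 16.5) + rng.normal(0, 0.01, trp_lys.size)

p_bll, _ = curve_fit(bll, trp_lys, gf, p0=[0.65, 0.01, 16.0])
p_qp, _ = curve_fit(qp, trp_lys, gf, p0=[0.0, 0.05, -0.001])
print("BLL breakpoint estimate:", round(p_bll[2], 2))
print("BIC  BLL:", round(bic(gf, bll(trp_lys, *p_bll), 4), 1),
      " QP:", round(bic(gf, qp(trp_lys, *p_qp), 4), 1))
```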
Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array
NASA Astrophysics Data System (ADS)
Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.
2018-01-01
The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.
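For context, the AIC and MDL criteria mentioned here estimate the number of sources from the eigenvalues of the sample covariance matrix of the array snapshots; a common textbook form (after Wax and Kailath) is sketched below with a simulated array whose geometry, snapshot count, and noise level are hypothetical.

```python
# Sketch of AIC / MDL source-number estimation from the eigenvalues of the
# sample covariance matrix of array snapshots (textbook Wax-Kailath form).
# Simulated 8-element array with 2 sources; geometry and SNR are hypothetical.
import numpy as np

def aic_mdl(eigvals, n_snapshots):
    m = len(eigvals)
    lam = np.sort(eigvals)[::-1]
    aic, mdl = [], []
    for k in range(m):
        tail = lam[k:]
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geometric / arithmetic mean
        ll = -n_snapshots * (m - k) * np.log(ratio)              # negative log-likelihood term
        aic.append(2 * ll + 2 * k * (2 * m - k))
        mdl.append(ll + 0.5 * k * (2 * m - k) * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))

rng = np.random.default_rng(2)
m, n, d = 8, 100, 2                                   # sensors, snapshots, true sources
steering = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin([0.3, -0.5])))
signals = (rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x = steering @ signals + noise
R = x @ x.conj().T / n
print(aic_mdl(np.linalg.eigvalsh(R), n))              # expected: (2, 2)
```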
ERIC Educational Resources Information Center
Robey, Randall R.
2004-01-01
The purpose of this tutorial is threefold: (a) review the state of statistical science regarding effect-sizes, (b) illustrate the importance of effect-sizes for interpreting findings in all forms of research and particularly for results of clinical-outcome research, and (c) demonstrate just how easily a criterion on reporting effect-sizes in…
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode Ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of the inaccurate covariance matrix estimation which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, Eigenspace-based minimum variance (EIBMV) beamformer has been employed for second harmonic USI. The Tissue Harmonic Imaging (THI) is achieved by the Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, would lead to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of variations of the important parameters in computing EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (using point target and cyst phantoms), and the proper parameters of EIBMV are indicated for THI.
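As a rough sketch of the computation behind MV and eigenspace-based MV weights (not the authors' implementation), one imaging point can be beamformed as below; the subarray and temporal averaging controlled by the parameters K and L in the paper are omitted and replaced by simple diagonal loading, and the array data are synthetic.

```python
# Sketch of minimum variance (MV) weights and an eigenspace-based MV (EIBMV)
# projection for one imaging point. Not the authors' implementation: subarray
# and temporal averaging (parameters L and K) are omitted and replaced by simple
# diagonal loading; the channel data below are synthetic.
import numpy as np

def mv_weights(R, a, delta=1e-2):
    # Diagonal loading stabilizes the estimated covariance matrix.
    Rl = R + delta * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)            # w = R^-1 a / (a^H R^-1 a)

def eibmv_weights(R, a, n_signal=1, delta=1e-2):
    w_mv = mv_weights(R, a, delta)
    vals, vecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    Es = vecs[:, -n_signal:]                   # signal-subspace eigenvectors
    return Es @ (Es.conj().T @ w_mv)           # project MV weights onto the signal subspace

rng = np.random.default_rng(3)
m = 16
a = np.ones(m, dtype=complex)                  # pre-steered (delayed) channels
snapshots = np.outer(a, rng.normal(size=200)) + 0.5 * rng.normal(size=(m, 200))
R = snapshots @ snapshots.conj().T / 200
y = eibmv_weights(R, a).conj() @ snapshots     # beamformed output samples
```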
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics
Zhu, J.
1995-01-01
A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500
Fosco, Whitney D; Hawk, Larry W
2017-02-01
A child's ability to sustain attention over time (AOT) is critical in attention-deficit/hyperactivity disorder (ADHD), yet no prior work has examined the extent to which a child's decrement in AOT on laboratory tasks relates to clinically-relevant behavior. The goal of this study is to provide initial evidence for the criterion validity of laboratory assessments of AOT. A total of 20 children with ADHD (7-12 years of age) who were enrolled in a summer treatment program completed two lab attention tasks (a continuous performance task and a self-paced choice discrimination task) and math seatwork. Analyses focused on relations between attention task parameters and math productivity. Individual differences in overall attention (OA) measures (averaged across time) accounted for 23% of the variance in math productivity, supporting the criterion validity of lab measures of attention. The criterion validity was enhanced by consideration of changes in AOT. Performance on all laboratory attention measures deteriorated as time-on-task increased, and individual differences in the decrement in AOT accounted for 40% of the variance in math productivity. The only variable to uniquely predict math productivity was from the self-paced choice discrimination task. This study suggests that attention tasks in the lab do predict a clinically-relevant target behavior in children with ADHD, supporting their use as a means to study attention processes in a controlled environment. Furthermore, this prediction is improved when attention is examined as a function of time-on-task and when the attentional demands are consistent between lab and life contexts.
Cuesta-Vargas, Antonio Ignacio; González-Sánchez, Manuel
2014-10-29
Spanish is one of the five most spoken languages in the world. There is currently no published Spanish version of the Örebro Musculoskeletal Pain Questionnaire (OMPQ). The aim of the present study is to describe the process of translating the OMPQ into Spanish and to perform an analysis of reliability, internal structure, internal consistency and concurrent criterion-related validity. Translation and psychometric testing. Two independent translators translated the OMPQ into Spanish. From both translations a consensus version was achieved. A backward translation was made to verify and resolve any semantic or conceptual problems. A total of 104 patients (67 men/37 women) with a mean age of 53.48 (±11.63), suffering from chronic musculoskeletal disorders, twice completed a Spanish version of the OMPQ. Statistical analysis was performed to evaluate the reliability, the internal structure, internal consistency and concurrent criterion-related validity with reference to the gold standard questionnaire SF-12v2. All variables except "Coping" showed a rate above 0.85 on reliability. The internal structure calculation through exploratory factor analysis indicated that 75.2% of the variance can be explained with six components with an eigenvalue higher than 1 and 52.1% with only three components higher than 10% of variance explained. In the concurrent criterion-related validity, several significant correlations were seen close to 0.6, exceeding that value in the correlation between general health and total value of the OMPQ. The Spanish version of the screening questionnaire OMPQ can be used to identify Spanish patients with musculoskeletal pain at risk of developing a chronic disability.
The Impact of Age on Quality Measure Adherence in Colon Cancer
Steele, Scott R.; Chen, Steven L.; Stojadinovic, Alexander; Nissan, Aviram; Zhu, Kangmin; Peoples, George E.; Bilchik, Anton
2012-01-01
BACKGROUND Recently lymph node yield (LNY) has been endorsed as a quality measure of CC resection adequacy. It is unclear whether this measure is relevant to all ages. We hypothesized that total lymph node yield (LNY) is negatively correlated with increasing age and overall survival (OS). STUDY DESIGN The Surveillance, Epidemiology and End Results (SEER) database was queried for all non-metastatic CC patients diagnosed from 1992–2004 (n=101,767), grouped by age (<40, 41–45, 46–50, and in 5-year increments until 86+ years). Proportions of patients meeting the 12 LNY minimum criterion were determined in each age group, and analyzed with multivariate linear regression adjusting for demographics and AJCC 6th Edition stage. Overall survival (OS) comparisons in each age category were based on the guideline of 12 LNY. RESULTS Mean LNY decreased with increasing age (18.7 vs. 11.4 nodes/patient, youngest vs. oldest group, P<0.001). The proportion of patients meeting the 12 LNY criterion also declined with each incremental age group (61.9% vs. 35.2% compliance, youngest vs. oldest, P<0.001). Multivariate regression demonstrated a negative effect of each additional year in age and log (LNY) with coefficient of −0.003 (95% CI −0.003 to −0.002). When stratified by age and nodal yield using the 12 LNY criterion, OS was lower for all age groups in Stage II CC with <12LNY, and each age group over 60 years with <12LNY for Stage III CC (P<0.05). CONCLUSIONS Every attempt to adhere to proper oncological principles should be made at time of CC resection regardless of age. The prognostic significance of the 12 LN minimum criterion should be applied even to elderly CC patients. PMID:21601492
Some refinements on the comparison of areal sampling methods via simulation
Jeffrey Gove
2017-01-01
The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660 - 1666, and can even vary within a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.
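For reference, the minimum-variance analysis used here reduces to an eigen-decomposition of the covariance matrix of the measured field components: the eigenvector with the smallest eigenvalue estimates the current-sheet normal. A minimal sketch with synthetic field samples (not Voyager data) follows.

```python
# Minimum-variance analysis (MVA) of a magnetic field time series: the
# eigenvector of the field covariance matrix with the smallest eigenvalue
# estimates the current-sheet normal. Synthetic data, not Voyager measurements.
import numpy as np

def minimum_variance_normal(b):
    """b: (N, 3) array of field samples (e.g., hour averages)."""
    m = b - b.mean(axis=0)
    cov = m.T @ m / len(b)                 # 3x3 variance matrix of the field
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = vecs[:, 0]                    # minimum-variance direction
    ratio = vals[1] / vals[0]              # intermediate/minimum ratio: quality check
    return normal, ratio

rng = np.random.default_rng(4)
n = 120
b_field = np.column_stack([np.cos(np.linspace(0, np.pi, n)),
                           np.sin(np.linspace(0, np.pi, n)),
                           0.2 + 0.02 * rng.normal(size=n)])   # field rotating in the x-y plane
normal, ratio = minimum_variance_normal(b_field)
print(normal, ratio)   # the normal should be close to the z axis
```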
Ackerman, Phillip L; Chamorro-Premuzic, Tomas; Furnham, Adrian
2011-03-01
BACKGROUND. Although recent research has provided evidence for the predictive validity of personality traits in academic settings, the path to an improved understanding of the nature of personality influences on academic achievement involves a reconceptualization of both criterion and predictor construct spaces. AIMS. For the criterion space, one needs to consider student behaviours beyond grades and level of educational attainment, and include what the student does among other things outside of the classroom. For the predictor space, it is possible to bring some order to the myriad personality constructs that have been developed over the last century, by focusing on common variance among personality and other non-ability traits. METHODS. We review these conceptual issues and several empirical studies. CONCLUSIONS. We demonstrate the possible increments in understanding non-ability determinants of academic achievement that may be obtained by focusing on areas where there is a theoretical convergence between predictor and criterion spaces. 2010 The British Psychological Society.
Quantitative assessment of mineral resources with an application to petroleum geology
Harff, Jan; Davis, J.C.; Olea, R.A.
1992-01-01
The probability of occurrence of natural resources, such as petroleum deposits, can be assessed by a combination of multivariate statistical and geostatistical techniques. The area of study is partitioned into regions that are as homogeneous as possible internally while simultaneously as distinct as possible. Fisher's discriminant criterion is used to select geological variables that best distinguish productive from nonproductive localities, based on a sample of previously drilled exploratory wells. On the basis of these geological variables, each wildcat well is assigned to the production class (dry or producer in the two-class case) for which the Mahalanobis' distance from the observation to the class centroid is a minimum. Universal kriging is used to interpolate values of the Mahalanobis' distances to all locations not yet drilled. The probability that an undrilled locality belongs to the productive class can be found, using the kriging estimation variances to assess the probability of misclassification. Finally, Bayes' relationship can be used to determine the probability that an undrilled location will be a discovery, regardless of the production class in which it is placed. The method is illustrated with a study of oil prospects in the Lansing/Kansas City interval of western Kansas, using geological variables derived from well logs. © 1992 Oxford University Press.
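A minimal sketch of the classification step (assignment to the production class whose centroid is nearest in Mahalanobis distance, using a pooled covariance matrix) is given below with invented well-log variables; the kriging of the distances and the Bayes step are not reproduced.

```python
# Assigning a wildcat location to the production class (producer vs. dry) with
# the minimum Mahalanobis distance to the class centroid, using a pooled
# covariance matrix. Well-log variables and values are hypothetical; the kriging
# and Bayes steps described in the abstract are not reproduced.
import numpy as np

def mahalanobis_classify(x, class_means, pooled_cov):
    inv = np.linalg.inv(pooled_cov)
    d2 = [(x - mu) @ inv @ (x - mu) for mu in class_means]
    return int(np.argmin(d2)), d2            # class index and squared distances

rng = np.random.default_rng(5)
producers = rng.normal([3.0, 1.5], 0.4, size=(40, 2))    # e.g., porosity index, thickness
dry_holes = rng.normal([2.2, 1.0], 0.4, size=(60, 2))
means = [producers.mean(axis=0), dry_holes.mean(axis=0)]
pooled = ((len(producers) - 1) * np.cov(producers.T) +
          (len(dry_holes) - 1) * np.cov(dry_holes.T)) / (len(producers) + len(dry_holes) - 2)
label, dists = mahalanobis_classify(np.array([2.8, 1.4]), means, pooled)
print("assigned class:", ["producer", "dry"][label], dists)
```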
Roton Minimum as a Fingerprint of Magnon-Higgs Scattering in Ordered Quantum Antiferromagnets.
Powalski, M; Uhrig, G S; Schmidt, K P
2015-11-13
A quantitative description of magnons in long-range ordered quantum antiferromagnets is presented which is consistent from low to high energies. It is illustrated for the generic S=1/2 Heisenberg model on the square lattice. The approach is based on a continuous similarity transformation in momentum space using the scaling dimension as the truncation criterion. Evidence is found for significant magnon-magnon attraction inducing a Higgs resonance. The high-energy roton minimum in the magnon dispersion appears to be induced by strong magnon-Higgs scattering.
Minimum Bayes risk image correlation
NASA Technical Reports Server (NTRS)
Minter, T. C., Jr.
1980-01-01
In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
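To make the fast Fourier implementation concrete, the sketch below cross-correlates a template with an image via FFTs and locates the correlation peak; the template is a plain matched filter, not the minimum-Bayes-risk filter estimated in the paper, and the image is synthetic.

```python
# FFT-based image correlation with a template (plain matched filter, not the
# minimum-Bayes-risk filter estimated in the paper). Synthetic image and template.
import numpy as np

def fft_correlate(image, template):
    F_img = np.fft.rfft2(image)
    F_tpl = np.fft.rfft2(template, s=image.shape)      # zero-pad template to image size
    corr = np.fft.irfft2(F_img * np.conj(F_tpl), s=image.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak                                   # peak gives the template offset

rng = np.random.default_rng(6)
image = rng.normal(0, 1, (128, 128))
template = image[40:48, 70:78].copy()                   # embed a known 8x8 target patch
corr, peak = fft_correlate(image, template)
print("estimated offset:", peak)                        # expect roughly (40, 70)
```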
46 CFR 173.095 - Towline pull criterion.
Code of Federal Regulations, 2010 CFR
2010-10-01
... diameter in feet (meters). s=that fraction of the propeller circle cylinder which would be intercepted by... shaft centerline at rudder to towing bitts in feet (meters). Δ=displacement in long tons (metric tons). f=minimum freeboard along the length of the vessel in feet (meters). B=molded beam in feet (meters...
What Is the Minimum Information Needed to Estimate Average Treatment Effects in Education RCTs?
ERIC Educational Resources Information Center
Schochet, Peter Z.
2014-01-01
Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating an intervention's effectiveness. Recently, the federal government has placed increased emphasis on the use of opportunistic experiments. A key criterion for conducting opportunistic experiments, however, is that there is relatively easy access to data…
Volcano plots in analyzing differential expressions with mRNA microarrays.
Li, Wentian
2012-12-01
A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or -log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarray.
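As an illustration of the basic plot, the sketch below computes per-gene log2 fold changes and ordinary (unregularized) t-test p-values from simulated expression data and draws the volcano plot; gene counts, group sizes, and the filtering thresholds are arbitrary.

```python
# Minimal volcano plot: log2 fold change (x) against -log10 p-value from an
# ordinary two-sample t-test (y). Simulated expression matrix; the regularized
# t-statistics discussed in the review are not implemented here.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_per_group = 2000, 6
control = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
treated = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
treated[:100] += 1.5                                    # 100 truly up-regulated genes

log2_fc = treated.mean(axis=1) - control.mean(axis=1)   # data already on a log2 scale
t, p = stats.ttest_ind(treated, control, axis=1)

plt.scatter(log2_fc, -np.log10(p), s=4, alpha=0.4)
plt.axhline(-np.log10(0.01), ls="--")                   # "double filtering" cut on p
plt.axvline(1.0, ls="--"); plt.axvline(-1.0, ls="--")   # and on fold change
plt.xlabel("log2 fold change"); plt.ylabel("-log10 p-value")
plt.title("Volcano plot (simulated data)")
plt.show()
```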
Efficient prediction designs for random fields.
Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut
2015-03-01
For estimation and predictions of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence theorem type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed, one relying on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criteria as local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the algorithms presented on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method is described for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response, whose mean and variance values are related to those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
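A minimal sketch of the variance-propagation idea (randomly varying fitted response-model coefficients and pushing the draws through coupled subsystem models) is given below; the two-subsystem chain, coefficient values, and uncertainties are invented for illustration.

```python
# Monte Carlo propagation of coefficient uncertainty through two coupled
# subsystem response models. The models, coefficients, and standard errors are
# invented; in practice they would come from fitted response models.
import numpy as np

rng = np.random.default_rng(8)
n_draws = 10_000

# Subsystem 1: y1 = b0 + b1*signal + b2*control + noise-factor effect
b = rng.normal([2.0, 0.8, -0.3], [0.05, 0.02, 0.02], size=(n_draws, 3))
signal, control = 5.0, 1.2                     # chosen factor settings
noise1 = rng.normal(0.0, 0.1, n_draws)         # subsystem 1 noise factor
y1 = b[:, 0] + b[:, 1] * signal + b[:, 2] * control + noise1

# Subsystem 2 takes y1 as its signal factor: y2 = c0 + c1*y1 + noise
c = rng.normal([0.5, 1.1], [0.03, 0.01], size=(n_draws, 2))
y2 = c[:, 0] + c[:, 1] * y1 + rng.normal(0.0, 0.05, n_draws)

print("system output mean %.3f, variance %.4f" % (y2.mean(), y2.var()))
# Repeating this for a grid of (signal, control) settings lets one search for
# settings that meet a specified mean/variance criterion.
```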
The importance of personality and parental styles on optimism in adolescents.
Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon
2014-01-01
Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted having personality factors (in the first step) and maternal and paternal parenting styles, and demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34, p < .01), extraversion (β = .26, p < .01) and agreeableness (β = .16, p < .01), accounted for 34% of the optimism variance and insignificant variance was predicted exclusively by parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.
Feasibility of digital image colorimetry--application for water calcium hardness determination.
Lopez-Molinero, Angel; Tejedor Cubero, Valle; Domingo Irigoyen, Rosa; Sipiera Piazuelo, Daniel
2013-01-15
Interpretation and relevance of basic RGB colors in Digital Image-Based Colorimetry have been treated in this paper. The studies were carried out using the chromogenic model formed by the reaction between Ca(II) ions and glyoxal bis(2-hydroxyanil), which produced orange-red colored solutions in alkaline media. Individual basic color data (RGB) and also the total intensity of colors, I(tot), were the original variables treated by Factorial Analysis. The evaluation showed that the highest variance of the system and the highest analytical sensitivity were associated with the G color. However, after study by Fourier transform, the basic R color was recognized as an important feature in the information; it appeared as an intrinsic characteristic differentiated in the low-frequency components of the Fourier transform. The Principal Components Analysis study showed that the variance of the system could be mostly retained in the first principal component, but was dependent on all basic colors. The colored complex was also applied and validated as a Digital Image Colorimetric method for the determination of Ca(II) ions. RGB intensities were linearly correlated with Ca(II) in the range 0.2-2.0 mg L(-1). In the best conditions, using the green color, a simple and reliable method for Ca determination could be developed. Its detection limit was established (criterion 3s) as 0.07 mg L(-1), and the reproducibility was lower than 6% for 1.0 mg L(-1) Ca. Other chromatic parameters were evaluated as dependent calibration variables; their representativeness, variance and sensitivity were discussed in order to select the best analytical variable. The potential of the procedure as a field-ready method, applicable 'in situ' with a minimum of experimental needs, was demonstrated. Analyses of Ca in different real water samples were carried out: municipal tap water, bottled mineral water, and natural river water were analyzed and the results were compared and evaluated statistically. Validity was assessed against the alternative techniques of flame atomic absorption spectroscopy and titrimetry. Some differences were observed, but they were consistent with the applied methods. Copyright © 2012 Elsevier B.V. All rights reserved.
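The linear calibration and 3s detection-limit calculation described here can be sketched as follows; the green-channel intensities, concentrations, and blank replicates are invented and serve only to illustrate the procedure.

```python
# Linear calibration of green-channel intensity vs. Ca(II) concentration and a
# 3s detection limit. Intensities and blank replicates are hypothetical.
import numpy as np

conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])              # mg/L Ca(II)
green = np.array([31.0, 62.0, 118.0, 171.0, 224.0])     # mean G intensity (made up)
slope, intercept = np.polyfit(conc, green, 1)           # least-squares calibration line

blanks = np.array([3.1, 2.7, 3.4, 2.9, 3.2, 2.8, 3.0, 3.3, 2.6, 3.1])  # blank readings
lod = 3.0 * blanks.std(ddof=1) / slope                  # 3s criterion, in mg/L

def predict_conc(intensity):
    return (intensity - intercept) / slope

print("sensitivity (slope):", round(slope, 1), "detection limit:", round(lod, 3), "mg/L")
print("sample at G = 95 ->", round(predict_conc(95.0), 2), "mg/L")
```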
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that the optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136
Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that the optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals.
Destructive examination of shipping package 9975-02644
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daugherty, W. L.
Destructive and non-destructive examinations have been performed on the components of shipping package 9975-02644 as part of a comprehensive SRS surveillance program for plutonium material stored in the K-Area Complex (KAC). During the field surveillance inspection of this package in KAC, three non-conforming conditions were noted: the axial gap of 1.389 inch exceeded the 1 inch maximum criterion, the exposed height of the lead shield was greater than the 4.65 inch maximum criterion, and the difference between the upper assembly inside height and the exposed height of the lead shield was less than the 0.425 inch minimum criterion. All three of these observations relate to axial shrinkage of the lower fiberboard assembly. In addition, liquid water (condensation) was observed on the interior of the drum lid, the thermal blanket and the air shield.
On thermonuclear ignition criterion at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming
2014-10-15
Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
The Unobtrusive Measurement of Racial Bias Among Recruit Classification Specialists
1974-10-01
Sattler, J. M. Racial "experimenter effects" in experimentation, testing, interviewing, and psychotherapy. Psychological Bulletin, 1970, 73. ... Analyses of Variance of Mean Test Scores (GCT + ARI) of Black and White Recruits Seen by Each Classifier ... Average Criterion Scores ... test scores and experiences equivalent to those interviewed by black classifiers. If these assumptions can be verified, several interesting
25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...
Validity and reliability of the Brazilian version of the Work Ability Index questionnaire.
Martinez, Maria Carmen; Latorre, Maria do Rosário Dias de Oliveira; Fischer, Frida Marina
2009-06-01
To evaluate the validity and reliability of the Portuguese language version of a work ability index. Cross sectional survey of a sample of 475 workers from an electrical company in the state of Sao Paulo, Southeastern Brazil (spread across ten municipalities in the Campinas area), carried out in 2005. The following aspects of the Brazilian version of the Work Ability Index were evaluated: construct validity, using factorial exploratory analysis, and discriminant capacity, by comparing mean Work Ability Index scores in two groups with different absenteeism levels; criterion validity, by determining the correlation between self-reported health and Work Ability Index score; and reliability, using Cronbach's alpha to determine the internal consistency of the questionnaire. Factorial analysis indicated three factors in the work ability construct: issues pertaining to 'mental resources' (20.6% of the variance), self-perceived work ability (18.9% of the variance), and presence of diseases and health-related limitations (18.4% of the variance). The index was capable of discriminating workers according to levels of absenteeism, identifying a significantly lower (p<0.0001) mean score among subjects with high absenteeism (37.2 points) when compared to those with low absenteeism (42.3 points). Criterion validity analysis showed a correlation between the index and all dimensions of health status analyzed (p<0.0001). Reliability of the index was high, with a Cronbach's alpha of 0.72. The Brazilian version of the Work Ability Index showed satisfactory psychometric properties with respect to construct validity, thus constituting an appropriate option for evaluating work ability in both individual and population-based settings.
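For reference, the internal-consistency statistic reported above (Cronbach's alpha) is computed from item and total-score variances as in the sketch below; the item responses are simulated, not the survey data.

```python
# Cronbach's alpha from a respondents-by-items matrix. Simulated responses;
# not the Work Ability Index data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(9)
latent = rng.normal(size=(475, 1))                           # one underlying trait
responses = latent + rng.normal(scale=0.8, size=(475, 7))    # 7 correlated items
print(round(cronbach_alpha(responses), 2))
```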
A test of source-surface model predictions of heliospheric current sheet inclination
NASA Technical Reports Server (NTRS)
Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.
1994-01-01
The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
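For readers unfamiliar with the criterion, the within-cluster sum of squares of a candidate partition is computed as below; the data and the two partitions compared are arbitrary, and none of the heuristic procedures evaluated in the article are implemented.

```python
# Within-cluster sum of squared deviations from cluster centroids (WCSS) for a
# candidate partition. Data and labels are arbitrary; the heuristics compared in
# the article are not implemented here.
import numpy as np

def wcss(data, labels):
    total = 0.0
    for k in np.unique(labels):
        cluster = data[labels == k]
        total += ((cluster - cluster.mean(axis=0)) ** 2).sum()
    return total

rng = np.random.default_rng(10)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
good = np.repeat([0, 1], 50)                 # partition matching the true structure
bad = rng.integers(0, 2, 100)                # random partition
print("WCSS good:", round(wcss(data, good), 1), " WCSS random:", round(wcss(data, bad), 1))
```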
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
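The zero-intercept least-squares fit called for in this calibration procedure can be sketched as follows; the concentrations, analyzer responses, and the pass/fail tolerance are placeholders, since the regulation's numerical acceptance criterion is truncated above.

```python
# Zero-intercept least-squares fit (y = m*x) of analyzer response vs. actual
# concentration, with a percent-deviation check at each calibration point.
# Concentrations, responses, and the tolerance are placeholders; consult the
# regulation for the actual acceptance criterion.
import numpy as np

conc = np.array([10.0, 20.0, 35.0, 50.0, 65.0, 80.0])     # percent of full scale
resp = np.array([10.2, 19.8, 35.4, 49.5, 65.8, 79.6])     # analyzer readings

m = (conc @ resp) / (conc @ conc)             # least-squares slope through the origin
deviation = 100.0 * (resp - m * conc) / (m * conc)

print("slope m =", round(m, 4))
for c, d in zip(conc, deviation):
    print(f"point {c:5.1f}: deviation {d:+.2f}%")
# If any point deviates by more than the allowed tolerance, a multi-point
# calibration curve is used instead of the linear y = m*x form.
```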
"Good Work Awards:" Effects on Children's Families. Technical Report #12.
ERIC Educational Resources Information Center
Chun, Sherlyn; Mays, Violet
This brief report describes parental reaction to a reinforcement strategy used with children in the Kamehameha Early Education Program (KEEP). Staff members report that "Good Work Awards" (GWAs) are viewed favorably by mothers of students. GWAs are dittoed notes sent home with children when they have met a minimum criterion for daily…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-10
... revise the minimum Emergency Diesel Generator (EDG) output voltage acceptance criterion in Surveillance... ensures the timely transfer of plant safety system loads to the Emergency Diesel Generators in the event a... from the emergency diesel generators in a timely manner. This change is needed to bring Fermi 2 into...
Family Living and Parenthood. Performance Objectives and Criterion-Referenced Test Items.
ERIC Educational Resources Information Center
Missouri Univ., Columbia. Instructional Materials Lab.
This guide was developed to assist home economics teachers in implementing the Missouri Vocational Instructional Management System into the home economics curriculum at the local level through a family living and parenthood semester course. The course contains a minimum of two performance objectives for each competency developed and validated by…
Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado
Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.
2003-01-01
The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r² value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r² values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modeling, and climate change assessments.
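A minimal sketch of this kind of comparison (Python, synthetic data assumed, not the station records used in the study) computes the gain in explained variance from adding a greenness predictor to the layer mean temperature.

```python
import numpy as np

def r_squared(X, y):
    """Explained variance of y from an ordinary least-squares fit on predictors X."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 200
t850 = rng.normal(0.0, 5.0, n)                # stand-in 850-700 mb layer mean temperature
ndvi = rng.uniform(0.1, 0.8, n)               # stand-in greenness (NDVI)
tmax = 1.2 * t850 - 6.0 * ndvi + rng.normal(0.0, 2.0, n)

gain = r_squared(np.column_stack([t850, ndvi]), tmax) - r_squared(t850[:, None], tmax)
print(f"increase in explained variance from adding NDVI: {gain:.3f}")
```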
Woods, Carl T; Keller, Brad S; McKeown, Ian; Robertson, Sam
2016-09-01
Woods, CT, Keller, BS, McKeown, I, and Robertson, S. A comparison of athletic movement among talent-identified juniors from different football codes in Australia: implications for talent development. J Strength Cond Res 30(9): 2440-2445, 2016-This study aimed to compare the athletic movement skill of talent-identified (TID) junior Australian Rules football (ARF) and soccer players. The athletic movement skill of 17 TID junior ARF players (17.5-18.3 years) was compared against 17 TID junior soccer players (17.9-18.7 years). Players in both groups were members of an elite junior talent development program within their respective football codes. All players performed an athletic movement assessment that included an overhead squat, double lunge, single-leg Romanian deadlift (both movements performed on right and left legs), a push-up, and a chin-up. Each movement was scored across 3 essential assessment criteria using a 3-point scale. The total score for each movement (maximum of 9) and the overall total score (maximum of 63) were used as the criterion variables for analysis. A multivariate analysis of variance tested the main effect of football code (2 levels) on the criterion variables, whereas a 1-way analysis of variance identified where differences occurred. A significant effect was noted, with the TID junior ARF players outscoring their soccer counterparts when performing the overhead squat and push-up. No other criterion variables differed significantly according to the main effect. Practitioners should be aware that specific sporting requirements may incur slight differences in athletic movement skill among TID juniors from different football codes. However, given the low athletic movement skill noted in both football codes, developmental coaches should address the underlying movement skill capabilities of juniors when prescribing physical training in both codes.
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Nelson, Lindsay D.; Patrick, Christopher J.; Bernat, Edward M.
2010-01-01
The externalizing dimension is viewed as a broad dispositional factor underlying risk for numerous disinhibitory disorders. Prior work has documented deficits in event-related brain potential (ERP) responses in individuals prone to externalizing problems. Here, we constructed a direct physiological index of externalizing vulnerability from three ERP indicators and evaluated its validity in relation to criterion measures in two distinct domains: psychometric and physiological. The index was derived from three ERP measures that covaried in their relations with externalizing proneness: the error-related negativity and two variants of the P3. Scores on this ERP composite predicted psychometric criterion variables and accounted for externalizing-related variance in P3 response from a separate task. These findings illustrate how a diagnostic construct can be operationalized as a composite (multivariate) psychophysiological variable (phenotype). PMID:20573054
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
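The minimum variance method referred to here is the standard eigen-analysis of the field covariance matrix; a minimal sketch (Python, synthetic field samples assumed, not the L1 spacecraft data) is shown below, where the eigenvector with the smallest eigenvalue estimates the phase-plane normal.

```python
import numpy as np

def minimum_variance_normal(B):
    """Eigenvector of the field covariance matrix with the smallest eigenvalue:
    the standard minimum variance estimate of the phase-plane (discontinuity) normal."""
    M = np.cov(B.T)                       # 3x3 covariance of the Bx, By, Bz samples
    w, v = np.linalg.eigh(M)              # eigenvalues in ascending order
    return v[:, 0], w

rng = np.random.default_rng(2)
# synthetic field: large variance in the x-y plane, small variance along z (the "normal")
B = rng.normal(0.0, [5.0, 3.0, 0.3], size=(500, 3))
n_hat, eigvals = minimum_variance_normal(B)
print(n_hat, eigvals)
```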
Signal detection with criterion noise: applications to recognition memory.
Benjamin, Aaron S; Diaz, Michael; Wee, Serena
2009-01-01
A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of discrimination incorporating criterion variability are derived, and the model's relationship with extant models of decision making in discrimination tasks is examined. An experiment evaluating recognition memory for ensembles of word stimuli revealed that criterion noise is not trivial in magnitude and contributes substantially to variance in the slope of the isosensitivity function. The authors discuss how ND-TSD can help explain a number of current and historical puzzles in recognition memory, including the inconsistent relationship between manipulations of learning and the isosensitivity function's slope, the lack of invariance of the slope with manipulations of bias or payoffs, the effects of aging on the decision-making process in recognition, and the nature of responding in remember-know decision tasks. ND-TSD poses novel, theoretically meaningful constraints on theories of recognition and decision making more generally, and provides a mechanism for rapprochement between theories of decision making that employ deterministic response rules and those that postulate probabilistic response rules.
Role of optimization criterion in static asymmetric analysis of lumbar spine load.
Daniel, Matej
2011-10-01
A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing an optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight in one outstretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria provide the same level of lumbar spine loading (the difference is below 25%), except the criterion of minimum lumbar shear force, which predicts unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle force activation pattern are in accordance with intradiscal pressure measurements and EMG measurements. L4/L5 spinal loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively, using the criterion of minimum muscle stress cubed. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies and a computationally simpler criterion can be used.
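As a toy illustration of this kind of inverse dynamics optimization (Python/SciPy, hypothetical single-joint parameters, not the paper's lumbar spine model), the minimum cubed muscle stress criterion can be posed as a constrained minimization:

```python
import numpy as np
from scipy.optimize import minimize

# toy single-joint model: 3 muscles with moment arms r (m) and cross sections A (m^2)
r = np.array([0.05, 0.04, 0.03])
A = np.array([12e-4, 8e-4, 6e-4])
M_ext = 40.0                                            # external moment to balance (N m)

cost = lambda F: np.sum((F / A / 1e6) ** 3)             # minimum cubed muscle stress (MPa)
cons = {"type": "eq", "fun": lambda F: r @ F - M_ext}   # static moment equilibrium
bounds = [(0.0, None)] * 3                              # muscles can only pull

res = minimize(cost, x0=np.full(3, 100.0), bounds=bounds, constraints=cons)
print(res.x, r @ res.x)                                 # predicted muscle forces, moment check
```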
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of SO4 and NO3 levels in rainwater on its acidity (pH). The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on individuals made repeatedly over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Development and Initial Validation of the Multicultural Personality Inventory (MPI).
Ponterotto, Joseph G; Fietzer, Alexander W; Fingerhut, Esther C; Woerner, Scott; Stack, Lauren; Magaldi-Dopman, Danielle; Rust, Jonathan; Nakao, Gen; Tsai, Yu-Ting; Black, Natasha; Alba, Renaldo; Desai, Miraj; Frazier, Chantel; LaRue, Alyse; Liao, Pei-Wen
2014-01-01
Two studies summarize the development and initial validation of the Multicultural Personality Inventory (MPI). In Study 1, the 115-item prototype MPI was administered to 415 university students, where exploratory factor analysis resulted in a 70-item, 7-factor model. In Study 2, the 70-item MPI and theoretically related companion instruments were administered to a multisite sample of 576 university students. Confirmatory factor analysis found the 7-factor structure to be a relatively good fit to the data (Comparative Fit Index = .954; root mean square error of approximation = .057), and MPI factors predicted variance in criterion variables above and beyond the variance accounted for by broad personality traits (i.e., Big Five). Study limitations and directions for further validation research are specified.
Possibility of modifying the growth trajectory in Raeini Cashmere goat.
Ghiasi, Heydar; Mokhtari, M S
2018-03-27
The objective of this study was to investigate the possibility of modifying the growth trajectory in the Raeini Cashmere goat breed. In total, 13,193 records of live body weight collected from 4788 Raeini Cashmere goats were used. According to Akaike's information criterion (AIC), the single-trait random regression model that included fourth-order Legendre polynomials for the direct and maternal genetic effects and for the maternal and individual permanent environmental effects was the best model for estimating (co)variance components. The matrices of eigenvectors of the (co)variances between random regression coefficients of the direct additive genetic effect were used to calculate eigenfunctions, and different eigenvector indices were also constructed. The results showed that the first eigenvalue explained 79.90% of the total genetic variance; therefore, changes in body weight along the first eigenfunction will be achieved rapidly. Selection based on the first eigenvector will produce favorable positive genetic gains for all body weights considered from birth to 12 months of age. For modifying the growth trajectory in the Raeini Cashmere goat, selection should be based on the second eigenfunction. The second eigenvalue accounted for 14.41% of the total genetic variance for body weights, which is low in comparison with the genetic variance explained by the first eigenvalue. Complex patterns of genetic change in the growth trajectory were observed under the third and fourth eigenfunctions, and only a small amount of genetic variance was explained by the third and fourth eigenvalues.
da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2014-09-01
This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation=2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ)<40. All models had adequate convergent validity (average variance extracted=.43-.58; composite reliability=.85-.97) and internal consistency (α=.85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion=84.07, Bayes information criterion=157.75, Browne-Cudeck criterion=84.46), with strong invariance for independent samples (Δχ(2)λ(7)=5.06, Δχ(2)Cov(8)=5.11, Δχ(2)Res(16)=19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of sampling scheme allows one to reduce the number of sampling points without decreasing or even increasing the accuracy of investigated attribute. Maps of bulk soil electrical conductivity (EC a ) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk EC a survey, has been applied in an agricultural field in Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used. the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid EC a data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The use of bulk EC a gradient as an exhaustive variable, known at any node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
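Two of the listed methods are easy to sketch; the following (Python, synthetic data assumed, not the study's simulation design) applies the Kaiser rule and a basic Horn-style parallel analysis to the eigenvalues of a correlation matrix.

```python
import numpy as np

def kaiser_and_parallel(X, n_sim=200, seed=0):
    """Factors retained by the Kaiser rule (eigenvalues > 1) and by parallel analysis
    (observed eigenvalues exceeding the 95th percentile of random-data eigenvalues)."""
    n, p = X.shape
    eig = np.linalg.eigvalsh(np.corrcoef(X.T))[::-1]            # descending
    kaiser = int(np.sum(eig > 1.0))
    rng = np.random.default_rng(seed)
    sim = np.array([np.linalg.eigvalsh(np.corrcoef(rng.standard_normal((n, p)).T))[::-1]
                    for _ in range(n_sim)])
    exceed = eig > np.percentile(sim, 95, axis=0)
    parallel = int(np.argmax(~exceed)) if not exceed.all() else p   # leading run of True
    return kaiser, parallel

rng = np.random.default_rng(4)
F = rng.standard_normal((300, 2))                    # two latent factors
X = F @ rng.standard_normal((2, 8)) + rng.standard_normal((300, 8))
print(kaiser_and_parallel(X))
```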
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments have shown that the proposed model is robust to the placement of initial contours and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by an edge-preserving curvature flow filter show a significant effect.
Physical employment standards for U.K. fire and rescue service personnel.
Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L
2016-01-01
Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. (i) Develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel. (ii) Develop practical physical selection tests for FRS applicants. (iii) Evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The double high tide at Port Ellen: Doodson's criterion revisited
NASA Astrophysics Data System (ADS)
Byrne, Hannah A. M.; Mattias Green, J. A.; Bowers, David G.
2017-07-01
Doodson proposed a minimum criterion to predict the occurrence of double high (or double low) waters when a higher-frequency tidal harmonic is added to the semi-diurnal tide. If the phasing of the harmonic is optimal, the condition for a double high water can be written bn²/a > 1 where b is the amplitude of the higher harmonic, a is the amplitude of the semi-diurnal tide, and n is the ratio of their frequencies. Here we expand this criterion to allow for (i) a phase difference ϕ between the semi-diurnal tide and the harmonic and (ii) the fact that the double high water will disappear in the event that b/a becomes large enough for the higher harmonic to be the dominant component of the tide. This can happen, for example, at places or times where the semi-diurnal tide is very small. The revised parameter is br²/a, where r is a number generally less than n, although equal to n when ϕ = 0. The theory predicts that a double high tide will form when this parameter exceeds 1 and then disappear when it exceeds a value of order n² and the higher harmonic becomes dominant. We test these predictions against observations at Port Ellen in the Inner Hebrides of Scotland. For most of the data set, the largest harmonic of the semi-diurnal tide is the sixth diurnal component, for which n = 3. The principal lunar and solar semi-diurnal tides are about equal at Port Ellen and so the semi-diurnal tide becomes very small twice a month at neap tides (here defined as the smallest fortnightly tidal range). A double high water forms when br²/a first exceeds a minimum value of about 1.5 as neap tides are approached and then disappears as br²/a then exceeds a second limiting value of about 10 at neap tides in agreement with the revised criterion.
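A numerical check of the phase dependence discussed above is straightforward (Python; illustrative amplitudes rather than the Port Ellen constituents, and the simple parameter bn²/a rather than the revised br²/a):

```python
import numpy as np

def count_maxima(a, b, n, phi, n_points=4096):
    """Count local maxima of eta(t) = a cos(t) + b cos(n t + phi) over one
    semi-diurnal period; more than one maximum signals a double high (or low) water."""
    t = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    eta = a * np.cos(t) + b * np.cos(n * t + phi)
    left, right = np.roll(eta, 1), np.roll(eta, -1)
    return int(np.sum((eta > left) & (eta > right)))

a, b, n = 1.0, 0.2, 3                      # sixth-diurnal harmonic, n = 3
for phi in (0.0, np.pi):                   # harmonic in phase vs. in anti-phase at high water
    print(phi, b * n**2 / a, count_maxima(a, b, n, phi))
```

With these values bn²/a = 1.8 > 1, yet the extra maxima appear only at the favourable phase, which is the dependence on ϕ that the revised criterion is designed to capture.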
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
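The jackknife step is generic; a minimal delete-one sketch (Python, illustrative data rather than the mice or silkworm examples) estimates the sampling variance of an estimated variance component.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of an arbitrary estimator."""
    n = len(data)
    theta_i = np.array([estimator(np.delete(data, i)) for i in range(n)])
    var = (n - 1) / n * np.sum((theta_i - theta_i.mean()) ** 2)
    return np.sqrt(var)

rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, size=30)
theta_hat = np.var(x, ddof=1)                           # the estimated variance component
se = jackknife_se(x, lambda d: np.var(d, ddof=1))
print(theta_hat, se, theta_hat / se)                    # estimate, jackknife SE, t-like ratio
```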
NASA Astrophysics Data System (ADS)
Nha, Hyunchul; Kim, Jaewan
2006-07-01
We derive a class of inequalities, from the uncertainty relations of the su(1,1) and the su(2) algebra in conjunction with partial transposition, that must be satisfied by any separable two-mode states. These inequalities are presented in terms of the su(2) operators Jx = (a†b + ab†)/2, Jy = (a†b - ab†)/2i, and the total photon number ⟨Na+Nb⟩. They include as special cases the inequality derived by Hillery and Zubairy [Phys. Rev. Lett. 96, 050503 (2006)], and the one by Agarwal and Biswas [New J. Phys. 7, 211 (2005)]. In particular, optimization over the whole inequalities leads to the criterion obtained by Agarwal and Biswas. We show that this optimal criterion can detect entanglement for a broad class of non-Gaussian entangled states, i.e., the su(2) minimum-uncertainty states. Experimental schemes to test the optimal criterion are also discussed, especially the one using linear optical devices and photodetectors.
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state that are optimal in some sense from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
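A minimal sketch of the minimum-variance (BLUE) analysis step underlying OI is given below (Python, toy background covariance and observations assumed); it shows how the prior statistics spread observational information to unobserved points and reduce the expected error variance.

```python
import numpy as np

def oi_update(xb, B, y, H, R):
    """Minimum-variance (BLUE) analysis: xa = xb + K(y - H xb), K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B        # analysis (posterior) error covariance
    return xa, A

xb = np.array([1.0, 2.0, 3.0])                                     # background (first guess)
B = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]])  # background error covariance
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])                   # observe points 0 and 2 only
R = 0.25 * np.eye(2)                                               # observation error covariance
y = np.array([1.4, 2.6])

xa, A = oi_update(xb, B, y, H, R)
print(xa)             # the unobserved middle point is adjusted through the covariances
print(np.diag(A))     # expected error variances are reduced relative to diag(B)
```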
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper, expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.
Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S
2014-04-01
Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can present false variations in the observations due to several parameters. This study investigates the impact of the geomembrane on ERT measurements. Indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically have infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane to satisfy the conditions under which the classical inversion tools are valid. This distance is a function of the electrode line length (i.e. of the unit electrode spacing) used, the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, it is possible to significantly improve the inversion process by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied. Copyright © 2014 Elsevier Ltd. All rights reserved.
Is my study system good enough? A case study for identifying maternal effects.
Holand, Anna Marie; Steinsland, Ingelin
2016-06-01
In this paper, we demonstrate how simulation studies can be used to answer questions about identifiability and consequences of omitting effects from a model. The methodology is presented through a case study where identifiability of genetic and/or individual (environmental) maternal effects is explored. Our study system is a wild house sparrow (Passer domesticus) population with known pedigree. We fit pedigree-based (generalized) linear mixed models (animal models), with and without additive genetic and individual maternal effects, and use the deviance information criterion (DIC) for choosing between these models. Pedigree and R-code for simulations are available. For this study system, the simulation studies show that only large maternal effects can be identified. The genetic maternal effect (and similarly for the individual maternal effect) has to be at least half of the total genetic variance to be identified. The consequences of omitting a maternal effect when it is present are explored. Our results indicate that the total (genetic and individual) variance is accounted for. When an individual (environmental) maternal effect is omitted from the model, this only influences the estimated (direct) individual (environmental) variance. When a genetic maternal effect is omitted from the model, both (direct) genetic and (direct) individual variance estimates are overestimated.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
External Catalyst Breakup Phenomena
1976-06-01
catalyst particle can cause high internal pressures which result in particle destruction. Analytical results suggest that erosion effects from solid... mechanisms. Pressure Forces: High G loadings and bed pressure drops should be avoided. Bed pre-loads should be kept at a minimum value. Thruster... [Table-of-contents fragment: 5.2.7.1 Failure Theories; 5.2.7.2 Maximum Tension Stress Criterion; 5.2.7.3 Distortion Energy Approach]
The Physiological Profile of Trained Female Dance Majors.
ERIC Educational Resources Information Center
Rimmer, James H.; And Others
This investigation studied the physiological profiles of eight highly trained female dance majors. To be considered highly trained, each subject had to be dancing a minimum of three hours a day, four to five days a week, for the last year. They also had to meet the criterion of dancing at least ten hours a week for the last five years prior to…
Measurement of academic entitlement.
Miller, Brian K
2013-10-01
Members of Generation Y, or Millennials, have been accused of being lazy, whiny, pampered, and entitled, particularly in the college classroom. Using an equity theory framework, eight items from a measure of work entitlement were adapted to measure academic entitlement in a university setting in three independent samples. In Study 1 (n = 229), confirmatory factor analyses indicated good model fit to a unidimensional structure for the data. In Study 2 (n = 200), the questionnaire predicted unique variance in university satisfaction beyond two more general measures of dispositional entitlement. In Study 3 (n = 161), the measure predicted unique variance in perceptions of grade fairness beyond that which was predicted by another measure of academic entitlement. This analysis provides evidence of discriminant, convergent, incremental, concurrent criterion-related, and construct validity for the Academic Equity Preference Questionnaire.
2011-01-01
Background Since stress is hypothesized to play a role in the etiology of obesity during adolescence, research on associations between adolescent stress and obesity-related parameters and behaviours is essential. Due to lack of a well-established recent stress checklist for use in European adolescents, the study investigated the reliability and validity of the Adolescent Stress Questionnaire (ASQ) for assessing perceived stress in European adolescents. Methods The ASQ was translated into the languages of the participating cities (Ghent, Stockholm, Vienna, Zaragoza, Pecs and Athens) and was implemented within the HELENA cross-sectional study. A total of 1140 European adolescents provided a valid ASQ, comprising 10 component scales, used for internal reliability (Cronbach α) and construct validity (confirmatory factor analysis or CFA). Contributions of socio-demographic (gender, age, pubertal stage, socio-economic status) characteristics to the ASQ score variances were investigated. Two-hundred adolescents also provided valid saliva samples for cortisol analysis to compare with the ASQ scores (criterion validity). Test-retest reliability was investigated using two ASQ assessments from 37 adolescents. Results Cronbach α-values of the ASQ scales (0.57 to 0.88) demonstrated a moderate internal reliability of the ASQ, and intraclass correlation coefficients (0.45 to 0.84) established an insufficient test-retest reliability of the ASQ. The adolescents' gender (girls had higher stress scores than boys) and pubertal stage (those in a post-pubertal development had higher stress scores than others) significantly contributed to the variance in ASQ scores, while their age and socio-economic status did not. CFA results showed that the original scale construct fitted moderately with the data in our European adolescent population. Only in boys, four out of 10 ASQ scale scores were a significant positive predictor for baseline wake-up salivary cortisol, suggesting a rather poor criterion validity of the ASQ, especially in girls. Conclusions In our European adolescent sample, the ASQ had an acceptable internal reliability and construct validity and the adolescents' gender and pubertal stage systematically contributed to the ASQ variance, but its test-retest reliability and criterion validity were rather poor. Overall, the utility of the ASQ for assessing perceived stress in adolescents across Europe is uncertain and some aspects require further examination. PMID:21943341
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
Hill, B D; Elliott, Emily M; Shelton, Jill T; Pella, Russell D; O'Jile, Judith R; Gouvier, W Drew
2010-03-01
Working memory is the cognitive ability to hold a discrete amount of information in mind in an accessible state for utilization in mental tasks. This cognitive ability is impaired in many clinical populations typically assessed by clinical neuropsychologists. Recently, there have been a number of theoretical shifts in the way that working memory is conceptualized and assessed in the experimental literature. This study sought to determine to what extent the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) Working Memory Index (WMI) measures the construct studied in the cognitive working memory literature, whether an improved WMI could be derived from the subtests that comprise the WAIS-III, and what percentage of variance in individual WAIS-III subtests is explained by working memory. It was hypothesized that subtests beyond those currently used to form the WAIS-III WMI would be able to account for a greater percentage of variance in a working memory criterion construct than the current WMI. Multiple regression analyses (n = 180) revealed that the best predictor model of subtests for assessing working memory was composed of the Digit Span, Letter-Number Sequencing, Matrix Reasoning, and Vocabulary. The Arithmetic subtest was not a significant contributor to the model. These results are discussed in the context of how they relate to Unsworth and Engle's (2006, 2007) new conceptualization of working memory mechanisms.
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 µm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
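A minimal sketch of minimum-variance proportional control of such a first-order discrete model (Python; arbitrary parameters, ignoring the actuator dynamics and measurement delay treated in the paper) is:

```python
import numpy as np

# discrete first-order sketch of the trapped probe's position (arbitrary parameters,
# not the identified model): x[k+1] = a*x[k] + b*u[k] + w[k], proportional feedback u = -g*x
a, b, sigma_w = 0.95, 0.5, 1.0
rng = np.random.default_rng(6)

def simulated_variance(gain, n_steps=100_000):
    x, acc = 0.0, 0.0
    for _ in range(n_steps):
        x = a * x + b * (-gain * x) + sigma_w * rng.standard_normal()
        acc += x * x
    return acc / n_steps

for g in (0.0, 0.5, 1.0, 1.5, 1.9):
    pole = a - b * g                        # closed-loop pole of the AR(1) model
    print(g, simulated_variance(g), sigma_w**2 / (1.0 - pole**2))
# the variance is smallest when the pole is driven to zero (g = a/b); the minimum
# achievable variance then equals the thermal driving variance sigma_w**2
```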
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is more stable geometry than a straight or nonmeandering alinement.
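The sine-generated curve described above is easy to reproduce (Python; arbitrary wavelength and maximum direction angle assumed):

```python
import numpy as np

# sine-generated curve: the direction angle is a sine function of distance along the
# channel, the form that minimizes the sum of squared changes in direction
M = 100.0                    # path length of one meander wavelength (arbitrary units)
omega = 1.0                  # maximum direction angle in radians (assumed value)
s = np.linspace(0.0, 2 * M, 2000)
theta = omega * np.sin(2 * np.pi * s / M)

ds = s[1] - s[0]
x = np.cumsum(np.cos(theta)) * ds        # integrate the direction angle to get the planform
y = np.cumsum(np.sin(theta)) * ds
print("sinuosity ~", s[-1] / x[-1])      # path length divided by down-valley distance
```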
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem has been tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with the results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
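As a minimal sketch of the MVDR weights referred to above (Python/NumPy; hypothetical steering vectors and field correlation, not the paper's array model):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR: minimize w^H R w subject to w^H d = 1, giving w = R^-1 d / (d^H R^-1 d)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

n = 8                                            # hypothetical 8-element loudspeaker array
k = np.arange(n)
d_focus = np.exp(1j * 0.6 * np.pi * k)           # steering vector to the focus point
d_ctrl = np.exp(1j * 0.1 * np.pi * k)            # point whose energy should be suppressed
R = np.outer(d_ctrl, d_ctrl.conj()) + 0.01 * np.eye(n)   # field correlation + diagonal loading

w = mvdr_weights(R, d_focus)
print(abs(w.conj() @ d_focus), abs(w.conj() @ d_ctrl))   # ~1 at the focus, small elsewhere
```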
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality. However, its resolution improvement is not sufficient compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with the DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients as well as unknown measurement biases may cause large estimation errors of conventional Kalman filters. This paper proposes a derivative-free version of nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously with derivative-free character, leading to a high-precision algorithm for the Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter could achieve an accurate state estimation, much better than the conventional unscented Kalman filter, showing the ability of high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
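A minimal sketch of time-reversed RK4 tracing under the Lorentz force (Python; a uniform field and arbitrary initial conditions assumed, not the Cassini/CAPS geometry) is:

```python
import numpy as np

Q_M = 9.58e7                 # charge-to-mass ratio of a proton (C/kg), for illustration

def lorentz(state, B):
    """Time derivative of [x, y, z, vx, vy, vz] for a charged particle in field B."""
    v = state[3:]
    return np.concatenate([v, Q_M * np.cross(v, B)])

def rk4_step(state, B, dt):
    k1 = lorentz(state, B)
    k2 = lorentz(state + 0.5 * dt * k1, B)
    k3 = lorentz(state + 0.5 * dt * k2, B)
    k4 = lorentz(state + dt * k3, B)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

B = np.array([0.0, 0.0, 5e-9])                      # uniform 5 nT field along z
state = np.array([0.0, 0.0, 0.0, 1e5, 0.0, 1e4])    # 100 km/s gyration plus parallel motion
dt = -0.01                                          # negative step: time-reversed tracing
for _ in range(10_000):
    state = rk4_step(state, B, dt)
print(state[:3])                                    # position of the ion 100 s earlier
```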
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region, with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.
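The minimum variance matrix analysis referred to here reduces, in practice, to an eigen-decomposition of the magnetic field covariance matrix; the short NumPy sketch below (with synthetic data, not the Pioneer-11 time series) shows the basic computation.

```python
import numpy as np

def minimum_variance_analysis(B):
    """B: array of shape (n_samples, 3) of magnetic field vectors.
    Returns eigenvalues (ascending) and eigenvectors (columns) of the
    field covariance matrix; the first column is the minimum variance
    direction."""
    M = np.cov(B, rowvar=False)            # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns ascending eigenvalues
    return eigvals, eigvecs

# toy example: field fluctuating mainly in the x-y plane
rng = np.random.default_rng(0)
B = rng.normal(scale=[1.0, 0.5, 0.05], size=(900, 3))
lam, vecs = minimum_variance_analysis(B)
n_hat = vecs[:, 0]   # estimated minimum variance (normal) direction
```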
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC=0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Andrews, Arthur R.; Bridges, Ana J.; Gomez, Debbie
2014-01-01
Purpose: The aims of the study were to evaluate the orthogonality of acculturation for Latinos. Design: Regression analyses were used to examine acculturation in two Latino samples (N = 77; N = 40). In a third study (N = 673), confirmatory factor analyses compared unidimensional and bidimensional models. Method: Acculturation was assessed with the ARSMA-II (Studies 1 and 2), and language proficiency items from the Children of Immigrants Longitudinal Study (Study 3). Results: In Studies 1 and 2, the bidimensional model accounted for slightly more variance (R² = .11 in Study 1; R² = .21 in Study 2) than the unidimensional model (R² = .10 in Study 1; R² = .19 in Study 2). In Study 3, the bidimensional model evidenced better fit (Akaike information criterion = 167.36) than the unidimensional model (Akaike information criterion = 1204.92). Discussion/Conclusions: Acculturation is multidimensional. Implications for Practice: Care providers should examine acculturation as a bidimensional construct. PMID:23361579
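For reference, the Akaike information criterion used above to compare the unidimensional and bidimensional factor models is, in its standard general form (not specific to this study),

$$\mathrm{AIC} = 2k - 2\ln \hat{L},$$

where $k$ is the number of estimated parameters and $\hat{L}$ is the maximized likelihood; lower values indicate a better trade-off between model fit and complexity.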
Survey of Noncommissioned Officer Academies for Criterion Development Purposes,
1961-12-01
Inspection, Fitting and Wearing of the Uniform, Ceremonies, Customs and Courtesies, Conduct of Physical Training Program, etc.)--minimum of 15 hours. 3...in a course and covers the general responsibilities of leadership, problems of leader-subordinate relationships, and some of the leader's specific...OPERATION AT INSTALLATIONS SURVEYED BY DA MILITARY PERSONNEL MANAGEMENT TEAMS Type of Training Program Installation Refresher Leadership Instructor
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS AICc Akaike’s Information Criterion with small sample size correction AZGFD Arizona Game and Fish Department BMGR Barry M. Goldwater...MNKA Minimum Number Known Alive N Abundance Ne Effective Population Size NGS Noninvasive Genetic Sampling NGS-CR Noninvasive Genetic...parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample
Wiley, Jeffrey B.
2006-01-01
Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than the biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for an individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970, when minimum flows were greater than the average between 1930 and 2002, and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual stations' record periods was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and the areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 for which the average of the regional minimum flows is nearly equal to the average for 1930-2002 are determined to be representative of 1930-2002.
Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
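The N-day, T-year low-flow statistics discussed above (such as the 7Q10) are commonly computed from a daily streamflow record along the lines of the pandas sketch below; the column names, the calendar-year grouping, and the use of an empirical quantile rather than a fitted frequency distribution are simplifying assumptions, not the procedure used in this report.

```python
import pandas as pd

def n_day_t_year_low_flow(daily_q, n_days=7, return_period=10):
    """daily_q: pandas Series of daily mean flows with a DatetimeIndex.
    Returns an empirical N-day, T-year low flow (e.g., the 7Q10)."""
    # N-day moving average of the daily flows
    rolling = daily_q.rolling(window=n_days).mean()
    # annual minima of the N-day averages
    annual_min = rolling.groupby(daily_q.index.year).min().dropna()
    # a T-year low flow has a non-exceedance probability of 1/T;
    # an empirical quantile stands in for a fitted distribution here
    return annual_min.quantile(1.0 / return_period)

# usage with a hypothetical record of daily flows:
# q = pd.read_csv("gage.csv", parse_dates=["date"], index_col="date")["flow"]
# q7_10 = n_day_t_year_low_flow(q, 7, 10)
```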
NASA Astrophysics Data System (ADS)
Shinjo, Ami; Hashiyama, Naoyuki; Koshio, Akane; Eto, Yujiro; Hirano, Takuya
2016-10-01
The continuous-variable (CV) Einstein-Podolsky-Rosen (EPR) paradox and steering are demonstrated using a pulsed light source and waveguides. We shorten the duration of the local oscillator (LO) pulse by using parametric amplification to improve the temporal mode-matching between the entangled pulse and the LO pulse. After correcting for the amplifier noise, the product of the measured conditional variances of the quadrature-phase amplitudes is 0.74 < 1, which satisfies the EPR-Reid criterion.
1994-03-01
criterion-related validity studies conducted between 1960 and 1984. Results showed that temperament constructs predict multiple components of job...interpretable factors that accounted for 65 percent of the variance. The first seven rotated factors from the item-level analyses roughly corresponded to...developed for use as a training aid. Using a demilitarized M16 rifle, the...The text of this section is adapted from the Project A Final Report (ARI
Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion
NASA Astrophysics Data System (ADS)
Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan
2016-08-01
The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, because it relies on minimizing the sum of squared errors (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion for the gain estimate from the SSE to the maximum correntropy. A test on simulated data demonstrates that our method has higher accuracy and faster convergence than KLL's and Chae's methods. The method effectively suppresses noise, especially typical non-Gaussian noise, and its computing time is the shortest of the three.
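For readers unfamiliar with the criterion, the correntropy that is maximized in place of minimizing the SSE can be written, in its usual Gaussian-kernel form (generic notation and a simplified gain-only residual model, not the paper's exact formulation), as

$$V_\sigma(e) = \frac{1}{N}\sum_{i=1}^{N} \exp\!\left(-\frac{e_i^{2}}{2\sigma^{2}}\right), \qquad e_i = d_i - g\,o_i,$$

where $e_i$ is the residual between the observed datum $d_i$ and the gain-corrected model $g\,o_i$, and $\sigma$ is the kernel bandwidth. Because distant outliers contribute almost nothing to the sum, a maximum-correntropy fit is far less sensitive to non-Gaussian noise than a least-squares fit.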
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
NASA Astrophysics Data System (ADS)
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices that follow jump-diffusion processes, consistent with the actual behavior of stock prices and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating M-V portfolio selection under discontinuous prices is presented.
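In generic notation (not the paper's exact formulation), the constrained dynamic mean-variance criterion with no short selling can be stated as

$$\min_{\pi(\cdot)\,\ge\,0}\ \operatorname{Var}\!\left[X_T^{\pi}\right] \quad \text{subject to} \quad \mathbb{E}\!\left[X_T^{\pi}\right] = z,$$

where $X_T^{\pi}$ is the terminal wealth under the portfolio process $\pi$ and $z$ is a target expected return; sweeping $z$ traces out the efficient frontier, and the additional value-at-risk constraint further restricts the admissible set.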
Beltukov, Y M; Fusco, C; Parshin, D A; Tanguy, A
2016-02-01
The vibrational properties of model amorphous materials are studied by combining a complete analysis of the vibration modes, the dynamical structure factor, and the energy diffusivity with exact diagonalization of the dynamical matrix and the kernel polynomial method, which allows very large system sizes to be studied. The materials studied differ only in the bending rigidity of the interactions in a Stillinger-Weber model used to describe amorphous silicon. The local bending rigidity can thus be used as a control parameter to tune the sound velocity together with the directionality of the local bonds. It is shown that for all the systems studied, the upper limit of the Boson peak corresponds to the Ioffe-Regel criterion for transverse waves, as well as to a minimum of the diffusivity. The Boson peak is followed by an increase in diffusivity supported by longitudinal phonons. The Ioffe-Regel criterion for transverse waves corresponds to a common characteristic mean free path of 5-7 Å (slightly larger for longitudinal phonons), while the fine structure of the vibrational density of states is shown to be sensitive to the local bending rigidity.
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
The minor component (MC) plays an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspaces and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. By using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple-MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have advantages in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computational advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Simple Criterion to Estimate Performance of Pulse Jet Mixed Vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pease, Leonard F.; Bamberger, Judith A.; Mahoney, Lenna A.
Pulse jet mixed process vessels comprise a key element of the U.S. Department of Energy’s strategy to process millions of gallons of legacy nuclear waste slurries. Slurry suctioned into a pulse jet mixer (PJM) tube at the end of one pulse is pneumatically driven from the PJM toward the bottom of the vessel at the beginning of the next pulse, forming a jet. The jet front traverses the distance from nozzle outlet to the bottom of the vessel and spreads out radially. Varying numbers of PJMs are typically arranged in a ring configuration within the vessel at a selected radius and operated concurrently. Centrally directed radial flows from neighboring jets collide to create a central upwell that elevates the solids in the center of the vessel when the PJM tubes expel their contents. An essential goal of PJM operation is to elevate solids to the liquid surface to minimize stratification. Solids stratification may adversely affect throughput of the waste processing plant. Unacceptably high slurry densities at the base of the vessel may plug the pipeline through which the slurry exits the vessel. Additionally, chemical reactions required for processing may not achieve complete conversion. To avoid these conditions, a means of predicting the elevation to which the solids rise in the central upwell that can be used during vessel design remains essential. In this paper we present a simple criterion to evaluate the extent of solids elevation achieved by a turbulent upwell jet. The criterion asserts that at any location in the central upwell the local velocity must be in excess of a cutoff velocity to remain turbulent. We find that local velocities in excess of 0.6 m/s are necessary for turbulent jet flow through both Newtonian and yield stress slurries. By coupling this criterion with the free jet velocity equation relating the local velocity to elevation in the central upwell, we estimate the elevation at which turbulence fails, and consequently the elevation at which the upwell fails to further lift the slurry. Comparing this elevation to the vessel fill level predicts whether the jet flow will achieve the full vertical extent of the vessel at the center. This simple local-velocity criterion determines a minimum PJM nozzle velocity at which the full vertical extent of the central upwell in PJM vessels will be turbulent. The criterion determines a minimum because flow in regions peripheral to the central upwelling jet may not be turbulent, even when the center of the vessel in the upwell is turbulent, if the jet pulse duration is too short. The local-velocity criterion ensures only that there is sufficient wherewithal for the turbulent jet flow to drive solids to the surface in the center of the vessel in the central upwell.
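As a rough illustration of how such a cutoff velocity translates into an elevation, the centerline velocity of a self-similar turbulent round jet decays approximately as (standard free-jet scaling with an order-of-magnitude decay constant, not the report's calibrated equation)

$$u_c(z) \approx C\,u_0\,\frac{d}{z}, \qquad z_{\max} \approx \frac{C\,u_0\,d}{u_{\mathrm{cut}}},$$

where $u_0$ is the nozzle velocity, $d$ the nozzle diameter, $C$ a decay constant of order 6 for Newtonian fluids, and $u_{\mathrm{cut}} \approx 0.6\ \mathrm{m/s}$ the turbulence cutoff quoted above; the resulting $z_{\max}$ is then compared with the vessel fill level.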
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
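For context, a first-order Markov (exponentially correlated) process with variance $\sigma_k^2$ and correlation time $\tau_k$ has the two-sided power spectral density (standard result, not taken from the paper; a one-sided convention carries an extra factor of two)

$$S_k(f) = \frac{2\sigma_k^2\,\tau_k}{1 + (2\pi f \tau_k)^2},$$

so a sum of five such processes, $S(f)=\sum_{k=1}^{5} S_k(f)$, with suitably chosen $(\sigma_k,\tau_k)$ can approximate, over several decades of frequency, the power-law spectra implied by an oscillator's Allan variance.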
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to cases of unequal variances, non-normality and unequal sample sizes. Given a specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is smaller than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power that is generally superior to that of the approximate t test. A numerical example is provided.
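To make the quantities underlying the sample-size formulas concrete, the sketch below implements a textbook version of Yuen's test statistic with trimmed means and winsorized variances; it is an illustration only and is not taken from the paper.

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample test on trimmed means with winsorized variances.
    Returns the test statistic, Welch-type degrees of freedom, and p-value."""
    def parts(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))
        h = n - 2 * g                            # effective sample size after trimming
        tmean = stats.trim_mean(a, trim)
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]    # winsorize the tails
        swin2 = w.var(ddof=1)                    # winsorized variance
        d = (n - 1) * swin2 / (h * (h - 1))
        return tmean, d, h
    m1, d1, h1 = parts(x)
    m2, d2, h2 = parts(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p
```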
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total current acreage estimate.
Guidance strategies and analysis for low thrust navigation
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1973-01-01
A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.
Development and psychometric properties of the Ethics Environment Questionnaire.
McDaniel, C
1997-09-01
The author reports on the development and the psychometric properties of the Ethics Environment Questionnaire (EEQ), an instrument by which to measure the opinions of health-care providers about ethics in their clinical practice organizations. The EEQ was developed to increase the number of valid and reliable measures pertaining to ethics in health-care delivery. The EEQ is a 20-item self-administered questionnaire using a Likert-type 5-point format, offering ease of administration. It is applicable to a cross-section of health-care practitioners and health-care facilities. The mean administration time is 10 minutes. The EEQ represents testing on 450 respondents in acute care settings among a cross-section of acute care facilities. Internal consistency reliability using Cronbach's alpha coefficient is 0.93, and the test-retest reliability is 0.88. Construct, content, and criterion validity are established. The scale is unidimensional, with factor loadings exceeding the minimum preset criterion. Mean score is 3.1 out of 5.0, with scores of 3.5 and above interpreted as reflective of a positive ethics environment. The EEQ provides a measure of ethics in health-care organizations among multi-practitioners in clinical practice on a valid, reliable, cost effective, and easily administered instrument that requires minimum investment of personnel time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, W. Geoffrey; Gray, David Clinton
Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low-contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with an SNR_pixel = 4.335 and a CNR_lesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum threshold that must be attained to guard against false positive lesion detection. Conclusion: Low-contrast resolution, or detectability, can be properly tested in a manner that is clinically relevant by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROIs placed in the background and cold sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and a CNR_lesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to prove acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
A statistical study of magnetopause structures: Tangential versus rotational discontinuities
NASA Astrophysics Data System (ADS)
Chou, Y.-C.; Hau, L.-N.
2012-08-01
A statistical study of the structure of Earth's magnetopause is carried out by analyzing two years of AMPTE/IRM plasma and magnetic field data. The analyses are based on the minimum variance analysis (MVA), the deHoffmann-Teller (HT) frame analysis and the Walén relation. A total of 328 magnetopause crossings are identified, and error estimates associated with the MVA and HT frame analyses are performed for each case. In 142 out of 328 events both the MVA and HT frame analyses yield high-quality results, which are classified as either tangential-discontinuity (TD) or rotational-discontinuity (RD) structures based only on the Walén relation: events with SWA ≤ 0.4 (SWA ≥ 0.5) are classified as TD (RD), and the rest (with 0.4 < SWA < 0.5) are classified as "uncertain," where SWA refers to the Walén slope. With this criterion, 84% of the 142 events are TDs, 12% are RDs, and 4% are uncertain. A large portion of the TD events exhibit a finite normal magnetic field component Bn but have insignificant flow compared to the Alfvén velocity in the HT frame. Two-dimensional Grad-Shafranov reconstructions of forty selected TD and RD events show that single or multiple X-lines accompanied by magnetic islands are a common feature of the magnetopause current. A survey plot of the HT velocity associated with TD structures projected onto the magnetopause shows that the flow is diverted at the subsolar point and accelerated toward the dawn and dusk flanks.
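The classification rule described above reduces to a few lines of code; the sketch below only encodes the thresholds quoted in the abstract (the Walén slope itself is assumed to be computed elsewhere).

```python
def classify_discontinuity(swa):
    """Classify a magnetopause crossing from its Walen slope (SWA)."""
    if swa <= 0.4:
        return "TD"          # tangential discontinuity
    if swa >= 0.5:
        return "RD"          # rotational discontinuity
    return "uncertain"       # 0.4 < SWA < 0.5
```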
NASA Astrophysics Data System (ADS)
Rocadenbosch, Francesc; Comeron, Adolfo; Vazquez, Gregori; Rodriguez-Gomez, Alejandro; Soriano, Cecilia; Baldasano, Jose M.
1998-12-01
Up to now, retrieval of the atmospheric extinction and backscatter has mainly relied on standard, straightforward, memoryless procedures such as the slope method, exponential-curve fitting and Klett's method. Yet their performance is ultimately limited by an inherent lack of adaptability, as they work only with the present returns; neither past estimates, nor the statistics of the signals, nor a priori uncertainties are taken into account. In this work, a first inversion of the backscatter and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is tackled by means of an extended Kalman filter (EKF), which overcomes these limitations. As successive return signals arrive, the filter updates itself, weighted by the imbalance between the a priori estimates of the optical parameters and the new ones, based on a minimum variance criterion. Calibration errors and initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach enables retrieval of the sought-after optical parameters as time- and range-dependent functions and hence tracking of the atmospheric evolution, its performance being limited only by the quality and availability of the a priori information and the accuracy of the assumed atmospheric model. The study ends with an encouraging practical inversion of a live scene measured with the Nd:YAG elastic-backscatter lidar station at our premises in Barcelona.
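For orientation, a generic discrete-time extended Kalman filter step of the kind employed here is sketched below; the state vector, the lidar measurement model h, and its Jacobian H are placeholders, not the authors' stochastic model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of a discrete extended Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement (e.g., a range-resolved lidar return)
    f, F : state transition function and its Jacobian
    h, H : measurement function and its Jacobian
    Q, R : process and measurement noise covariances"""
    # predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # update (minimum variance weighting of prediction versus measurement)
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new
```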
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, to obtain both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
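For completeness, the unit-gain linearly constrained minimum-variance beamformer weights that underlie both the conventional and the proposed variants take the standard form (generic notation; the weight-normalization and orthonormal-lead-field modifications proposed in the paper are not shown)

$$\mathbf{w}(\mathbf{r}) = \frac{\mathbf{C}^{-1}\,\mathbf{l}(\mathbf{r})}{\mathbf{l}(\mathbf{r})^{\mathsf T}\,\mathbf{C}^{-1}\,\mathbf{l}(\mathbf{r})},$$

where $\mathbf{C}$ is the sensor covariance matrix and $\mathbf{l}(\mathbf{r})$ the lead field of a source at location $\mathbf{r}$; applying $\mathbf{w}^{\mathsf T}$ to the sensor data yields the source time course to which ICA is subsequently applied in the source-space approach.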
The psychometric properties of an Iranian translation of the Work Ability Index (WAI) questionnaire.
Abdolalizadeh, M; Arastoo, A A; Ghsemzadeh, R; Montazeri, A; Ahmadi, K; Azizi, A
2012-09-01
This study was carried out to evaluate the psychometric properties of an Iranian translation of the Work Ability Index (WAI) questionnaire. In this methodological study, nurses and healthcare workers aged 40 years and older who worked in educational hospitals in Ahvaz (236 workers) in 2010 completed the questionnaire, and 60 of the workers filled out the WAI questionnaire a second time to assess test-retest reliability. The forward-backward method was applied to translate the questionnaire from English into Persian. The psychometric properties of the Iranian translation of the WAI were assessed using the following tests: internal consistency (to test reliability), test-retest analysis, exploratory factor analysis (construct validity), discriminant validity by comparing the mean WAI score in two groups of employees who had different levels of sick leave, and criterion validity by determining the correlation between the Persian version of the Short Form health survey (SF-36) and the WAI score. Cronbach's alpha coefficient was estimated to be 0.79, indicating sufficiently high internal consistency. The intraclass correlation coefficient was 0.92. Factor analysis indicated three factors in the structure of work ability: self-perceived work ability (24.5% of the variance), mental resources (22.23% of the variance), and presence of disease and health-related limitation (18.55% of the variance). Statistical tests showed that this questionnaire was capable of discriminating between two groups of employees who had different levels of sick leave. Criterion validity analysis showed that this instrument and all dimensions of the Iranian version of the SF-36 were correlated significantly. Item-total correlations corrected for overlap showed that the items had good correlations except for one. The findings of the study showed that the Iranian version of the WAI is a reliable and valid measure of work ability and can be used in both research and practical activities.
Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats
Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen
2009-01-01
Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives Compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on average percent drug lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage significantly differed under the three criterion rules. The squad that was trained 2nd may have benefited by partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed, and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimations closer to absolute threshold can be obtained when criteria are employed that are permissive, and that allow rats to maintain lever preferences. PMID:19082992
Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete-time system is studied, along with the design of compensators to achieve minimum variance control strategies for single-input single-output systems. In the first problem, the initial conditions of the plant are random variables with known first- and second-order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum-order Luenberger observer and is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways, two of them working directly in the frequency domain and one working in the time domain. The first- and second-order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees to which the interval hypotheses are accepted or rejected. An applied example is used to show the performance of this method.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
Code of Federal Regulations, 2014 CFR
2014-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Code of Federal Regulations, 2013 CFR
2013-04-01
... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class.
Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa.
Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy
2015-01-01
Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine if performance on the two contrasts is significantly different in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion, and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of reaching criterion across SL. The second stage of the analysis was a repeated measures logistic regression in which SL and contrast were used to predict the likelihood of reaching the speech discrimination criterion. Infants were able to reach criterion for the /a-i/ contrast at statistically lower SLs when compared to /ba-da/. There were six infants who never reached criterion for /ba-da/ and one who never reached criterion for /a-i/. The conditional probability of not reaching criterion by 70 dB SL was 0% for /a-i/ and 21% for /ba-da/. The predictive logistic regression model showed that children were more likely to discriminate the /a-i/ contrast even when controlling for SL. Nearly all normal-hearing infants can reach the discrimination criterion for a vowel contrast at 60 dB SL, while a level of ≥70 dB SL may be needed to allow all infants to reach the discrimination criterion for a difficult consonant contrast. American Academy of Audiology.
Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.
2016-01-01
Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817
Somma, Antonella; Borroni, Serena; Maffei, Cesare; Giarolli, Laura E; Markon, Kristian E; Krueger, Robert F; Fossati, Andrea
2017-10-01
In order to assess the reliability, factorial validity, and criterion validity of the Personality Inventory for DSM-5 (PID-5) among adolescents, 1,264 Italian high school students were administered the PID-5. Participants were also administered the Questionnaire on Relationships and Substance Use as a criterion measure. In the full sample, McDonald's ω values were adequate for the PID-5 scales (median ω = .85, SD = .06), except for Suspiciousness. However, all PID-5 scales showed average inter-item correlation values in the .20-.55 range. Exploratory structural equation modeling analyses provided moderate support for the a priori model of PID-5 trait scales. Ordinal logistic regression analyses showed that selected PID-5 trait scales predicted a significant, albeit moderate (Cox & Snell R² values ranged from .08 to .15, all ps < .001) amount of variance in Questionnaire on Relationships and Substance Use variables.
Criterion and incremental validity of the emotion regulation questionnaire
Ioannidis, Christos A.; Siegling, A. B.
2015-01-01
Although research on emotion regulation (ER) is developing, little attention has been paid to the predictive power of ER strategies beyond established constructs. The present study examined the incremental validity of the Emotion Regulation Questionnaire (ERQ; Gross and John, 2003), which measures cognitive reappraisal and expressive suppression, over and above the Big Five personality factors. It also extended the evidence for the measure's criterion validity to yet unexamined criteria. A university student sample (N = 203) completed the ERQ, a measure of the Big Five, and relevant cognitive and emotion-laden criteria. Cognitive reappraisal predicted positive affect beyond personality, as well as experiential flexibility and constructive self-assertion beyond personality and affect. Expressive suppression explained incremental variance in negative affect beyond personality and in experiential flexibility beyond personality and general affect. No incremental effects were found for worry, social anxiety, rumination, reflection, and preventing negative emotions. Implications for the construct validity and utility of the ERQ are discussed. PMID:25814967
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
The research of Raman spectra measurement system based on tiled-grating monochromator
NASA Astrophysics Data System (ADS)
Liu, Li-na; Zhang, Yin-chao; Chen, Si-ying; Chen, He; Guo, Pan; Wang, Yuan
2013-09-01
A Raman spectrum measurement system, essentially a Raman spectrometer, has been independently designed and built by our research group. The system adopts a tiled-grating structure: two 50 mm × 50 mm holographic gratings are tiled to form one large grating. This not only improves the resolution but also reduces the cost. This article outlines the system's composition, structure and performance parameters. The corresponding resolutions of the instrument under different criteria are then deduced through experiments and data fitting. The results show that the system's minimum resolution reaches 0.02 nm, equivalent to a wavenumber of 0.5 cm⁻¹, under the Rayleigh criterion, and 0.007 nm, equivalent to 0.19 cm⁻¹, under the Sparrow criterion. Raman spectra of CCl4 and alcohol were then obtained with the spectrometer, and each agreed well with the corresponding standard spectrum. Finally, we measured the spectra of alcohol solutions with different concentrations and extracted the intensities of the characteristic peaks from the smoothed spectra. A linear fit between the characteristic-peak intensity and the alcohol solution concentration was made, and the linear correlation coefficient is 0.96.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
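For context, under the two-parameter logistic model the item information that the maximum-information rule maximizes is (standard IRT result, omitting the optional scaling constant D; not specific to this study)

$$I_i(\theta) = a_i^{2}\,P_i(\theta)\,\bigl[1 - P_i(\theta)\bigr], \qquad P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},$$

where $a_i$ and $b_i$ are the item discrimination and difficulty; the posterior-weighted information and minimum expected posterior variance rules replace the point evaluation at the provisional $\hat\theta$ with integrals over the current posterior for $\theta$.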
Husby, Arild; Gustafsson, Lars; Qvarnström, Anna
2012-01-01
The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.
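The quantities reported here are the standard quantitative-genetic ratios

$$h^{2} = \frac{V_A}{V_P}, \qquad CV_A = \frac{100\sqrt{V_A}}{\bar{x}},$$

where $V_A$ is the additive genetic variance, $V_P$ the total phenotypic variance, and $\bar{x}$ the trait mean; the low $h^{2}=0.04$ and $CV_A=2.16$ reported for incubation duration imply little additive genetic variation for selection to act on.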
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved).
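Under the normality assumptions of the paper, the overlap-based effect size and its simple plug-in estimator can be written as

$$\pi = \Phi(\delta), \qquad \hat{\pi} = \Phi(d), \qquad \delta = \frac{\mu_T - \mu_C}{\sigma},$$

where $\Phi$ is the standard normal distribution function, $\delta$ the population standardized mean difference, and $d$ its sample counterpart; the results summarized above concern the exact distribution, bias, and unbiased versions of this estimator.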
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill was analyzed for 30 healthy young, 27 healthy elderly and 10 falls risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed to eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta values between scales 1 and 2 are effective for recognizing falls risk gait patterns. Results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
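A minimal sketch of this kind of multiscale variance analysis is given below, using PyWavelets to decompose a synthetic MTC-like series into detail coefficients, computing the variance at each scale, and fitting the slope of log-variance against scale; the wavelet family, the eight-level depth and the synthetic signal are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
mtc = np.cumsum(rng.normal(0, 1, 2048)) * 0.01 + 15.0  # synthetic MTC-like series (mm)

# Discrete wavelet transform to 8 levels (detail signals d1..d8)
coeffs = pywt.wavedec(mtc, "db4", level=8)
details = coeffs[1:]              # [d8, d7, ..., d1]
scales = np.arange(8, 0, -1)      # matching scale index for each detail vector

variances = np.array([np.var(d) for d in details])

# Multiscale exponent beta: slope of log2(variance) against scale
beta, intercept = np.polyfit(scales, np.log2(variances), 1)

for s, v in zip(scales, variances):
    print(f"scale {s}: variance = {v:.4e}")
print(f"multiscale exponent beta = {beta:.3f}")
```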
A minimum attention control law for ball catching.
Jang, Cheongjae; Lee, Jee-eun; Lee, Sohee; Park, F C
2015-10-06
Digital implementations of control laws typically involve discretization with respect to both time and space, and a control law that can achieve a task at coarser levels of discretization can be said to require less control attention, and also reduced implementation costs. One means of quantitatively capturing the attention of a control law is to measure the rate of change of the control with respect to changes in state and time. In this paper we present an attention-minimizing control law for ball catching and other target tracking tasks based on Brockett's attention criterion. We first highlight the connections between this attention criterion and some well-known principles from human motor control. Under the assumption that the optimal control law is the sum of a linear time-varying feedback term and a time-varying feedforward term, we derive an LQR-based minimum attention tracking control law that is stable, and obtained efficiently via a finite-dimensional optimization over the symmetric positive-definite matrices. Taking ball catching as our primary task, we perform numerical experiments comparing the performance of the various control strategies examined in the paper. Consistent with prevailing theories about human ball catching, our results exhibit several familiar features, e.g., the transition from open-loop to closed-loop control during the catching movement, and improved robustness to spatiotemporal discretization. The presented control laws are applicable to more general tracking problems that are subject to limited communication resources.
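The minimum attention law itself cannot be reconstructed from the abstract, but the LQR tracking component it builds on can be sketched for a double-integrator "catcher" toy model; the dynamics, the weights and the use of SciPy's continuous-time Riccati solver are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator catcher dynamics per axis: state x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize tracking error more than velocity error
R = np.array([[0.1]])      # control effort weight

# Standard infinite-horizon LQR gain: K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

def control(state, reference):
    """Feedback part of a tracking law: u = -K (x - x_ref)."""
    return -(K @ (state - reference))

x = np.array([0.0, 0.0])          # catcher at rest
x_ref = np.array([1.5, 0.0])      # predicted interception point
print("LQR gain K =", K)
print("u =", control(x, x_ref))
```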
SU-F-T-272: Patient Specific Quality Assurance of Prostate VMAT Plans with Portal Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darko, J; Osei, E; University of Waterloo, Waterloo, ON
Purpose: To evaluate the effectiveness of using the Portal Dosimetry (PD) method for patient specific quality assurance of prostate VMAT plans. Methods: As per institutional protocol all VMAT plans were measured using the Varian Portal Dosimetry (PD) method. A gamma evaluation criterion of 3%-3mm with a minimum area gamma pass rate (gamma <1) of 95% is used clinically for all plans. We retrospectively evaluated the portal dosimetry results for 170 prostate patients treated with the VMAT technique. Three sets of criteria were adopted for re-evaluating the measurements: 3%-3mm, 2%-2mm and 1%-1mm. For all criteria two areas, Field+1cm and MLC-CIAO, were analysed. To ascertain the effectiveness of the portal dosimetry technique in determining the delivery accuracy of prostate VMAT plans, 10 patients previously measured with portal dosimetry were randomly selected and their measurements repeated using the ArcCHECK method. The same criteria used in the analysis of PD were used for the ArcCHECK measurements. Results: All patient plans reviewed met the institutional criteria for area gamma pass rate. Overall, the gamma pass rate (gamma <1) decreases from the 3%-3mm to the 2%-2mm and 1%-1mm criteria. For each criterion the pass rate was significantly reduced when MLC-CIAO was used instead of Field+1cm. There was a noticeable change in sensitivity for MLC-CIAO with the 2%-2mm criterion and a much more significant reduction at 1%-1mm. Comparable results were obtained for the ArcCHECK measurements. Although differences were observed between the clockwise and counterclockwise plans in both the PD and ArcCHECK measurements, these were not deemed to be statistically significant. Conclusion: This work demonstrates that the Portal Dosimetry technique can be effectively used for quality assurance of VMAT plans. Results obtained show similar sensitivity compared to ArcCHECK. To reveal certain delivery inaccuracies, the use of a combination of criteria may provide an effective way of improving the overall sensitivity of PD. Funding provided in part by the Prostate Ride for Dad, Kitchener-Waterloo, Canada.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least square (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
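A minimal sketch of a one-tap (per-frequency) RLS channel estimator with a forgetting factor is shown below; the pilot model, noise level and fixed forgetting factor are illustrative assumptions, and the LMS adaptation of the forgetting factor proposed in the paper is only noted in a comment, not implemented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames = 200
h_true = 0.8 * np.exp(1j * 0.6)   # true one-tap channel gain at one frequency
lam = 0.95                        # forgetting factor (the paper adapts this via LMS)

h_hat = 0.0 + 0.0j                # channel estimate
p = 1e3                           # inverse of the exponentially weighted input energy

for k in range(n_frames):
    x = np.exp(1j * 2 * np.pi * rng.random())                     # unit-amplitude pilot symbol
    y = h_true * x + 0.1 * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)

    # One-tap RLS recursion
    g = p * np.conj(x) / (lam + p * abs(x) ** 2)   # gain
    e = y - h_hat * x                              # a priori error
    h_hat = h_hat + g * e
    p = (p - g * x * p) / lam

print("true h:", h_true)
print("estimated h:", h_hat)
```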
Evaluation of the NASA Ames no. 1 7 by 10 foot wind tunnel as an acoustic test facility
NASA Technical Reports Server (NTRS)
Wilby, J. F.; Scharton, T. D.
1975-01-01
Measurements were made in the no. 1 7'x10' wind tunnel at NASA Ames Research Center, with the objectives of defining the acoustic characteristics and recommending minimum cost treatments so that the tunnel can be converted into an acoustic research facility. The results indicate that the noise levels in the test section are due to (a) noise generation in the test section, associated with the presence of solid bodies such as the pitot tube, and (b) propagation of acoustic energy from the fan. A criterion for noise levels in the test section is recommended, based on low-noise microphone support systems. Noise control methods required to meet the criterion include removal of hardware items from the test section and diffuser, improved design of microphone supports, and installation of acoustic treatment in the settling chamber and diffuser.
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection, in an entanglement-breaking environment, whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture, based on sum-frequency generation (SFG) and feedforward (FF) processing, for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic (detection probability versus false-alarm probability) for optimum QI target detection under the Neyman-Pearson criterion.
A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs
NASA Technical Reports Server (NTRS)
Bloomfield, Harvey S.
1991-01-01
The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.
On the complexity of search for keys in quantum cryptography
NASA Astrophysics Data System (ADS)
Molotkov, S. N.
2016-03-01
The trace distance is used as a security criterion in proofs of security of keys in quantum cryptography. Some authors doubted that this criterion can be reduced to criteria used in classical cryptography. The following question has been answered in this work. Let a quantum cryptography system provide an ε-secure key such that (1/2)‖ρ_XE − ρ_U ⊗ ρ_E‖_1 < ε, which will be repeatedly used in classical encryption algorithms. To what extent does the ε-secure key reduce the number of search steps (guesswork) as compared to the use of ideal keys? A direct relation has been demonstrated between the complexity of an exhaustive search over keys, which is one of the main security criteria in classical systems, and the trace distance used in quantum cryptography. Bounds for the minimum and maximum numbers of search steps for the determination of the actual key have been presented.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and the non-negative integer orders (the correlation structure) of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
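The MINLP and direct-search machinery cannot be reproduced from the abstract, but the underlying idea of scoring candidate ARMA orders with information criteria can be sketched with the brute-force enumeration the authors use as a benchmark; the data and the order ranges below are hypothetical, and statsmodels' Kalman-filter-based likelihood stands in for the authors' own implementation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

# Hypothetical stationary ARMA(2,1) series
np.random.seed(3)
y = arma_generate_sample(ar=[1, -0.6, 0.2], ma=[1, 0.4], nsample=400)

best = None
for p in range(4):            # brute-force enumeration over candidate orders
    for q in range(4):
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue
        if best is None or res.aic < best[0]:
            best = (res.aic, res.bic, p, q)

aic, bic, p, q = best
print(f"selected ARMA({p},{q}): AIC = {aic:.1f}, BIC = {bic:.1f}")
```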
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult for breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.
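The χ² consistency check referred to above can be sketched in a few lines: at each analysis step the squared, covariance-normalized innovations are compared against their expected value, which equals the number of observations when the error covariances are correctly specified. The arrays below are hypothetical stand-ins for the innovations and innovation covariance of a real filter.

```python
import numpy as np

def innovation_chi2(innovation, innovation_cov):
    """chi^2 statistic nu^T S^{-1} nu for one analysis step.

    If forecast and observation error covariances are correctly specified,
    its expected value equals the number of observations.
    """
    nu = np.asarray(innovation, dtype=float)
    S = np.asarray(innovation_cov, dtype=float)
    return float(nu @ np.linalg.solve(S, nu))

# Hypothetical innovations from one tracer assimilation step (3 observations)
nu = np.array([0.4, -1.1, 0.7])
S = np.diag([0.5, 1.2, 0.8])

chi2 = innovation_chi2(nu, S)
print(f"chi^2 = {chi2:.2f} (expected about {len(nu)} if covariances are well tuned)")
```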
Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B
2016-09-01
Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned for each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at all ages can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
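To make the Legendre-polynomial part of a random regression model concrete, the sketch below builds second-order Legendre covariates from ages standardized to [-1, 1], the design columns on which the random regression coefficients act; the age range, the unit-norm scaling, and the helper name are generic choices for illustration, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age_days, age_min=0.0, age_max=84.0, order=2):
    """Return Legendre covariates P_0..P_order evaluated at ages
    rescaled to [-1, 1], as used in random regression models."""
    t = np.asarray(age_days, dtype=float)
    x = 2.0 * (t - age_min) / (age_max - age_min) - 1.0   # standardize age
    cols = []
    for k in range(order + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0                                      # select P_k
        # Common normalization so each covariate has unit norm on [-1, 1]
        cols.append(np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, coef))
    return np.column_stack(cols)

ages = np.array([1, 14, 28, 42, 56, 70, 84])
Z = legendre_covariates(ages)
print(Z.round(3))   # one row per test age, columns P0, P1, P2
```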
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
Precise and efficient noise variance estimation is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
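The exact improved threshold function and the two-state Gaussian mixture classifier cannot be recovered from the abstract, so the sketch below only illustrates the general recipe: estimate the noise standard deviation from the finest-scale detail coefficients, then apply a soft/hard compromise threshold. The particular compromise function (a weighted blend controlled by alpha), the wavelet, and the test signal are assumptions for illustration.

```python
import numpy as np
import pywt

def denoise(signal, wavelet="db4", level=5, alpha=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise std from the finest-scale detail coefficients (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold

    def blend_threshold(d):
        """Compromise between hard (keep) and soft (shrink) thresholding."""
        out = np.zeros_like(d)
        keep = np.abs(d) > thr
        soft = np.sign(d[keep]) * (np.abs(d[keep]) - thr)
        out[keep] = alpha * d[keep] + (1.0 - alpha) * soft
        return out

    coeffs = [coeffs[0]] + [blend_threshold(d) for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)
print("residual RMS:", np.sqrt(np.mean((denoise(noisy) - clean) ** 2)))
```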
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
2014-03-27
Front-matter fragment (lists of contents, figures, and abbreviations), including: 4.2.3 Number of Hops Hs; 4.2.4 Number of Sensors M; 4.5 Standard deviation vs. Ns; 4.6 Bias; MTM, multiple taper method; MUSIC, multiple signal classification; MVDR, minimum variance distortionless response; PSK, phase shift keying; QAM.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
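The top-level fusion principle described above, linear minimum variance combination of local estimates, can be sketched for two local filters as an inverse-covariance weighted average. This toy version ignores cross-correlations between local estimates and does not implement the adaptive fading UKF itself; the state vectors and covariances are hypothetical.

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Linear minimum variance fusion of independent local estimates.

    P_fused = (sum_i P_i^{-1})^{-1},  x_fused = P_fused * sum_i P_i^{-1} x_i
    """
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

# Hypothetical local state estimates (e.g., INS/GNSS and INS/CNS branches)
x1, P1 = np.array([10.2, 0.95]), np.diag([0.40, 0.010])
x2, P2 = np.array([10.6, 1.02]), np.diag([0.25, 0.030])

x_f, P_f = lmv_fuse([x1, x2], [P1, P2])
print("fused state:", x_f)
print("fused covariance diagonal:", np.diag(P_f))
```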
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
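The Kalman gain computation whose cost the paper aims to reduce can be sketched directly with a standard Riccati solver for a small, hypothetical turbulence-phase state model; SciPy's solve_discrete_are stands in for the "standard solver" whose scaling with aperture size motivates the authors' approximation, and all matrices below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(5)
n, m = 20, 12                               # toy state (phase modes) and measurement sizes

A = 0.99 * np.eye(n)                        # near-unity temporal correlation of turbulence modes
C = rng.normal(size=(m, n)) / np.sqrt(n)    # wavefront-sensor measurement matrix
Q = 0.01 * np.eye(n)                        # process (turbulence excitation) noise covariance
R = 0.05 * np.eye(m)                        # measurement noise covariance

# Steady-state prediction covariance from the discrete algebraic Riccati equation
P = solve_discrete_are(A.T, C.T, Q, R)

# Asymptotic Kalman gain K = P C^T (C P C^T + R)^{-1}
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
print("Kalman gain shape:", K.shape)
```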
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
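The "standard correction formula" whose bias is studied above is Thorndike's Case II correction for direct range restriction; a sketch with hypothetical numbers is given below, while the paper's improved, heteroscedasticity-robust estimator is not reproduced here.

```python
import math

def correct_direct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction.

    r_c = r*u / sqrt(1 + r^2 (u^2 - 1)),  with u = SD_unrestricted / SD_restricted.
    Assumes linearity and homoscedasticity, which is exactly the setting the
    paper shows can badly bias the corrected value when violated.
    """
    u = sd_unrestricted / sd_restricted
    r = r_restricted
    return r * u / math.sqrt(1.0 + r * r * (u * u - 1.0))

# Hypothetical selection scenario: observed validity 0.30 in a selected sample
print(correct_direct_range_restriction(r_restricted=0.30,
                                        sd_unrestricted=1.00,
                                        sd_restricted=0.60))   # about 0.46
```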
Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M
1999-05-20
A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach enables one to overcome the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Whereas those procedures are inherently not adaptable because independent inversions are performed for each return signal and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account, in the case of the Kalman filter the filter updates itself because it is weighted by the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates based on a minimum-variance criterion, as long as there are different lidar returns. Calibration errors and initialization uncertainties can be assimilated also. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation and realistic conditions, it is shown that the EKF approach enables one to retrieve the optical parameters as time-range-dependent functions and hence to track the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
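For reference, the static mean-variance building block mentioned above can be sketched as the closed-form global minimum-variance portfolio from a sample covariance matrix; the return data are synthetic and the stochastic treatment of return and risk distributions discussed in the paper is not attempted here.

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Synthetic monthly returns for four hypothetical sector indices
rng = np.random.default_rng(6)
rets = rng.multivariate_normal(
    mean=[0.006, 0.004, 0.008, 0.005],
    cov=np.diag([0.002, 0.001, 0.004, 0.0015]),
    size=120,
)
w = min_variance_weights(rets)
print("weights:", np.round(w, 3), "sum =", round(w.sum(), 3))
```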
1990-10-01
…type of approach for finding a dense displacement vector field has a time complexity that allows a real-time implementation when an appropriate control… vector fields as they appear in stereo or motion. The reason for this is the fact that local displacement vector field (DVF) estimates have… objects' motion, but that the quantitative optical flow is not a reliable measure of the real motion [VP87, SU87]. This applies even more to the…
Becker, Daniel F; Añez, Luis Miguel; Paris, Manuel; Bedregal, Luis; Grilo, Carlos M
2009-01-01
This study examined the internal consistency, factor structure, and diagnostic efficiency of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), criteria for avoidant personality disorder (AVPD) and the extent to which these metrics may be affected by sex. Subjects were 130 monolingual Hispanic adults (90 men, 40 women) who had been admitted to a specialty clinic that provides psychiatric and substance abuse services to Spanish-speaking patients. All were reliably assessed with the Spanish-Language Version of the Diagnostic Interview for DSM-IV Personality Disorders. The AVPD diagnosis was determined by the best-estimate method. After evaluating internal consistency of the AVPD criterion set, an exploratory factor analysis was performed using principal components extraction. Afterward, diagnostic efficiency indices were calculated for all AVPD criteria. Subsequent analyses examined men and women separately. For the overall group, internal consistency of AVPD criteria was good. Exploratory factor analysis revealed a 1-factor solution (accounting for 70% of the variance), supporting the unidimensionality of the AVPD criterion set. The best inclusion criterion was "reluctance to take risks," whereas "interpersonally inhibited" was the best exclusion criterion and the best predictor overall. When men and women were examined separately, similar results were obtained for both internal consistency and factor structure, with slight variations noted between sexes in the patterning of diagnostic efficiency indices. These psychometric findings, which were similar for men and women, support the construct validity of the DSM-IV criteria for AVPD and may also have implications for the treatment of this particular clinical population.
NASA Astrophysics Data System (ADS)
Frederiksen, Carsten S.; Frederiksen, Jorgen S.; Sisson, Janice M.; Osbrough, Stacey L.
2017-05-01
Changes in the characteristics of Southern Hemisphere (SH) storms, in all seasons, during the second half of the twentieth century, have been related to changes in the annual cycle of SH baroclinic instability. In particular, significant negative trends in baroclinic instability, as measured by the Phillips Criterion, have been found in the region of the climatological storm tracks; a zonal band of significant positive trends occur further poleward. Corresponding to this decrease/increase in baroclinic instability there is a decrease/increase in the growth rate of storm formation at these latitudes over this period, and in some cases a preference for storm formation further poleward than normal. Based on model output from a multi-model ensemble (MME) of coupled atmosphere-ocean general circulation models, it is shown that these trends are the result of external radiative forcing, including anthropogenic greenhouse gases, ozone, aerosols and land-use change. The MME is used in an analysis of variance method to separate the internal (natural) variability in the Phillips Criterion from influences associated with anomalous external radiative forcing. In all seasons, the leading externally forced mode has a significant trend and a loading pattern highly correlated with the pattern of trends in the Phillips Criterion. The covariance between the externally forced component of SH rainfall and the leading external mode strongly resembles the MME pattern of SH rainfall trends. A comparison between similar analyses of MME simulations using the second half of the twenty-first century of the Representative Concentration Pathways (RCP) RCP8.5 and RCP4.5 scenarios show that trends in the Phillips Criterion and rainfall are projected to continue and intensify under increasing anthropogenic greenhouse gas concentrations.
SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shojaei, M; Dumitru, N; Pella, S
2016-06-15
Purpose: High dose rate brachytherapy is a highly localized radiation therapy that has a very high dose gradient. Thus one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured with regard to their dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during the treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction motion, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
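The empirical core of the scheme, a two-predictor linear regression of episode-minimum ANC on pre-episode ANC and the change in discharge, can be sketched as below with entirely hypothetical data; the fitted coefficients have no relation to the ERP catchments.

```python
import numpy as np

# Hypothetical episode records: pre-episode ANC (ueq/L), change in discharge, minimum ANC
pre_anc = np.array([80.0, 55.0, 120.0, 40.0, 95.0, 60.0, 150.0, 70.0])
d_flow = np.array([2.1, 4.5, 1.2, 6.0, 3.3, 5.1, 0.8, 2.9])
min_anc = np.array([52.0, 18.0, 98.0, -5.0, 60.0, 20.0, 130.0, 38.0])

# Multiple linear regression: min_ANC = b0 + b1*pre_ANC + b2*d_flow
X = np.column_stack([np.ones_like(pre_anc), pre_anc, d_flow])
beta, *_ = np.linalg.lstsq(X, min_anc, rcond=None)

pred = X @ beta
r2 = 1.0 - np.sum((min_anc - pred) ** 2) / np.sum((min_anc - min_anc.mean()) ** 2)
print("coefficients (b0, b1, b2):", np.round(beta, 2))
print(f"variance explained R^2 = {r2:.2f}")
```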
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from GP are dependent on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.
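A stripped-down version of the kernel-scoring idea can be sketched with scikit-learn: fit a Gaussian process for each candidate composed kernel and rank the candidates by a BIC computed from the log marginal likelihood. The candidate kernels, the BIC form and the synthetic "progression" data are assumptions; the CKL search and the explained-variance term of the paper's energy function are not implemented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, DotProduct

# Synthetic longitudinal marker: smooth progression plus noise
rng = np.random.default_rng(7)
X = np.sort(rng.uniform(-10, 5, 60))[:, None]        # years to (hypothetical) onset
y = 1.0 / (1.0 + np.exp(-0.6 * X[:, 0])) + 0.05 * rng.normal(size=60)

candidates = {
    "RBF + noise": RBF() + WhiteKernel(),
    "linear + noise": DotProduct() + WhiteKernel(),
    "RBF * linear + noise": RBF() * DotProduct() + WhiteKernel(),
}

for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    lml = gp.log_marginal_likelihood_value_
    k = len(gp.kernel_.theta)                         # number of kernel hyperparameters
    bic = k * np.log(len(y)) - 2.0 * lml
    print(f"{name:22s} LML = {lml:7.2f}  BIC = {bic:7.2f}")
```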
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream-riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance, mainly for small sample sizes. These findings suggest methods for increasing the efficiency of microclimate monitoring in riparian areas.
Canivez, Gary L; Watkins, Marley W
2010-12-01
The present study examined the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV; D. Wechsler, 2008a) standardization sample using exploratory factor analysis, multiple factor extraction criteria, and higher order exploratory factor analysis (J. Schmid & J. M. Leiman, 1957) not included in the WAIS-IV Technical and Interpretation Manual (D. Wechsler, 2008b). Results indicated that the WAIS-IV subtests were properly associated with the theoretically proposed first-order factors, but all but one factor-extraction criterion recommended extraction of one or two factors. Hierarchical exploratory analyses with the Schmid and Leiman procedure found that the second-order g factor accounted for large portions of total and common variance, whereas the four first-order factors accounted for small portions of total and common variance. It was concluded that the WAIS-IV provides strong measurement of general intelligence, and clinical interpretation should be primarily at that level.
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
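For concreteness, the classic Laplacian-smoothed least-squares inversion that the ASC modifies can be sketched as a Tikhonov problem; the Green's function matrix, the data and the regularization parameter below are hypothetical, and neither the adaptive reweighting nor the Helmert variance component estimation of the paper is implemented.

```python
import numpy as np

def laplacian_1d(n):
    """Second-difference (discrete Laplacian) smoothing matrix for n patches in a row."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def smoothed_inversion(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha^2 ||L m||^2 (classic Laplacian smoothness constraint)."""
    L = laplacian_1d(G.shape[1])
    A = G.T @ G + alpha ** 2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Hypothetical geodetic problem: 15 observations, 10 slip patches
rng = np.random.default_rng(8)
G = rng.normal(size=(15, 10))
m_true = np.exp(-0.5 * ((np.arange(10) - 5) / 2.0) ** 2)   # smooth slip distribution
d = G @ m_true + 0.05 * rng.normal(size=15)

m_est = smoothed_inversion(G, d, alpha=1.0)
print("recovered slip:", np.round(m_est, 2))
```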
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. The real-time, early-warning monitoring of landslides is of great significance in reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength and the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of the landslide could be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually, indicating that the theoretical results are increasingly reliable. This shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision early-warning monitoring of the slope.
Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M
2010-12-01
In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Studywise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
On a stronger-than-best property for best prediction
NASA Astrophysics Data System (ADS)
Teunissen, P. J. G.
2008-03-01
The minimum mean squared error (MMSE) criterion is a popular criterion for devising best predictors. In the case of linear predictors, it has the advantage that no further distributional assumptions need to be made, other than about the first- and second-order moments. In the spatial and Earth sciences, it is the best linear unbiased predictor (BLUP) that is used most often. Despite the fact that in this case only the first- and second-order moments need to be known, one often still makes statements about the complete distribution, in particular when statistical testing is involved. For such cases, one can do better than the BLUP, as shown in Teunissen (J Geod. doi: 10.1007/s00190-007-0140-6, 2006), and thus devise predictors that have a smaller MMSE than the BLUP. Hence, these predictors are to be preferred over the BLUP, if one really values the MMSE criterion. In the present contribution, we will show, however, that the BLUP has another optimality property than the MMSE property, provided that the distribution is Gaussian. It will be shown that in the Gaussian case, the prediction error of the BLUP has the highest possible probability of all linear unbiased predictors of being bounded in the weighted squared norm sense. This is a stronger property than the often advertised MMSE property of the BLUP.
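For reference, a minimal statement of the MMSE criterion and the BLUP discussed above is given in the LaTeX snippet below, using generic symbols (x for the predictand, y for the data) rather than the paper's notation.

```latex
% Minimum mean squared error criterion for a predictor \hat{x}(y):
%   \hat{x} = \arg\min_{\hat{x}} \, \mathrm{E}\,\|x - \hat{x}(y)\|^{2}.
% Restricting to linear unbiased predictors gives the BLUP, which needs
% only first- and second-order moments:
\hat{x}_{\mathrm{BLUP}} = \mu_{x} + \Sigma_{xy}\,\Sigma_{yy}^{-1}\,(y - \mu_{y}),
\qquad
\mathrm{MSE} = \Sigma_{xx} - \Sigma_{xy}\,\Sigma_{yy}^{-1}\,\Sigma_{yx}.
```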
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
Finding local genome rearrangements.
Simonaitis, Pijus; Swenson, Krister M
2018-01-01
The double cut and join (DCJ) model of genome rearrangement is well studied due to its mathematical simplicity and power to account for the many events that transform gene order. These studies have mostly been devoted to the understanding of minimum length scenarios transforming one genome into another. In this paper we search instead for rearrangement scenarios that minimize the number of rearrangements whose breakpoints are unlikely due to some biological criteria. One such criterion has recently become accessible due to the advent of the Hi-C experiment, facilitating the study of 3D spatial distance between breakpoint regions. We establish a link between the minimum number of unlikely rearrangements required by a scenario and the problem of finding a maximum edge-disjoint cycle packing on a certain transformed version of the adjacency graph. This link leads to a 3/2-approximation as well as an exact integer linear programming formulation for our problem, which we prove to be NP-complete. We also present experimental results on fruit flies, showing that Hi-C data is informative when used as a criterion for rearrangements. A new variant of the weighted DCJ distance problem is addressed that ignores scenario length in its objective function. A solution to this problem provides a lower bound on the number of unlikely moves necessary when transforming one gene order into another. This lower bound aids in the study of rearrangement scenarios with respect to chromatin structure, and could eventually be used in the design of a fixed parameter algorithm with a more general objective function.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
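As a hedged illustration of one of the candidate curves, the following Python sketch fits Wood's lactation function y(t) = a t^b exp(-c t) to made-up monthly FPR records and scores it with the least-squares form of AIC; it is not the authors' SAS NLMIXED mixed-model analysis.

import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood incomplete-gamma lactation curve."""
    return a * t**b * np.exp(-c * t)

def aic_ls(y, y_hat, k):
    """Least-squares form of Akaike's information criterion."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# made-up monthly fat:protein-ratio test-day records (illustrative only)
t = np.arange(1, 11, dtype=float)          # test month
fpr = np.array([1.35, 1.22, 1.14, 1.10, 1.08, 1.07, 1.08, 1.10, 1.13, 1.17])

popt, _ = curve_fit(wood, t, fpr, p0=[1.3, -0.1, -0.02], maxfev=10000)
fit = wood(t, *popt)
print("parameters:", popt)
print("AIC:", aic_ls(fpr, fit, k=3))
print("month of minimum predicted FPR:", t[np.argmin(fit)])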
Passive Electroreception in Fish: An Analog Model of the Spike Generation Zone.
NASA Astrophysics Data System (ADS)
Harvey, James Robert
Sensory transduction begins in receptor cells specialized to the sensory modality involved and proceeds to the more generalized stage of the first afferent fiber, converting the initial sensory information into neural spikes for transmittal to the central nervous system. We have developed a unique analog electronic model of the generalized step (also known as the spike generation zone (SGZ)) using a tunnel diode, an operational amplifier, resistors, and capacitors. With no externally applied simulated postsynaptic input current, our model represents a 10⁻³ cm² patch (100 times the typical in vivo area) of tonically active, nonadaptive, postsynaptic neural membrane that behaves as a pacemaker cell. Similar to the FitzHugh-Nagumo equations, our model is shown to be a simplification of the Hodgkin-Huxley parallel conductance model and can be analyzed by the methods of van der Pol. Measurements using the model yield results which compare favorably to physiological stimulus-response data gathered by Murray for elasmobranch electroreceptors. We then use the model to show that the main contribution to variance in the rate of neural spike output is provided by coincident inputs to the SGZ oscillator (i.e., by synaptic input noise) and not by inherent instability of the SGZ oscillator. Configured for maximum sensitivity, our model is capable of detecting stimulus changes as low as 50 fA in less than a second, which corresponds to a fractional frequency change of Δf/f ≈ 2 × 10⁻³. Much data exists implying that in vivo detection of Δf/f is limited to the range of one to ten percent (Weber-Fechner criterion). We propose that the variance induced by the synaptic input noise provides a plausible physiological basis for the Weber-Fechner criterion.
Psychometric validation of a condom self-efficacy scale in Korean.
Cha, EunSeok; Kim, Kevin H; Burke, Lora E
2008-01-01
When an instrument is translated for use in cross-cultural research, it needs to account for cultural factors without distorting the psychometric properties of the instrument. The aim was to validate the psychometric properties of the condom self-efficacy scale (CSE), originally developed for American adolescents and young adults, after translating the scale into Korean (CSE-K), and to determine its suitability for cross-cultural research among Korean college students. A cross-sectional, correlational design was used with an exploratory survey methodology through self-report questionnaires. A convenience sample of 351 students, aged 18 to 25 years, was recruited at a university in Seoul, Korea. The participants completed the CSE-K and the intention of condom use scales after they were translated from English to Korean using a combined translation technique. A demographic and sex history questionnaire, which included an item to assess actual condom usage, was also administered. Mean, variance, reliability, criterion validity, and factorial validity using confirmatory factor analysis were assessed in the CSE-K. Norms for the CSE-K were similar, but not identical, to norms for the English version. The means of all three subscales were lower for the CSE-K than for the original CSE; however, the obtained variance of the CSE-K was roughly similar to that of the original CSE. The Cronbach's alpha coefficient for the total scale was higher for the CSE-K (.91) than for either the original CSE (.85) or the Thai CSE (.85). Criterion validity and construct validity of the CSE-K were confirmed. The CSE-K was a reliable and valid scale in measuring condom self-efficacy among Korean college students. The findings suggest that the CSE was an appropriate instrument to conduct cross-cultural research on sexual behavior in adolescents and young adults.
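For the reliability figures quoted above, a minimal sketch of the Cronbach's alpha computation on a made-up item-score matrix (not the CSE-K data):

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# illustrative 5-point Likert responses (simulated, not the CSE-K responses)
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 1))
scores = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(60, 14))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")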
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Fong, Ted C T; Ho, Rainbow T H
2015-01-01
The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.
Direct numerical simulation of turbulent Rayleigh-Bénard convection in a vertical thin disk
NASA Astrophysics Data System (ADS)
Xu, Wei; Wang, Yin; He, Xiao-Zhou; Yik, Hiu-Fai; Wang, Xiao-Ping; Schumacher, Jorg; Tong, Penger
2017-11-01
We report a direct numerical simulation (DNS) of turbulent Rayleigh-Bénard convection in a thin vertical disk with a high-order spectral element method code NEK5000. An unstructured mesh is used to adapt to the turbulent flow in the thin disk and to ensure that the mesh sizes satisfy the refined Groetzbach criterion and a new criterion for thin boundary layers proposed by Shishkina et al. The DNS results for the mean and variance temperature profiles in the thermal boundary layer region are found to be in good agreement with the predictions of the new boundary layer models proposed by Shishkina et al. and Wang et al. Furthermore, we numerically calculate the five budget terms in the boundary layer equation, which are difficult to measure in experiment. The DNS results agree well with the theoretical predictions by Wang et al. Our numerical work thus provides strong support for the development of a common framework for understanding the effect of boundary layer fluctuations. This work was supported in part by Hong Kong Research Grants Council.
Leadership styles across hierarchical levels in nursing departments.
Stordeur, S; Vandenberghe, C; D'hoore, W
2000-01-01
Some researchers have reported on the cascading effect of transformational leadership across hierarchical levels. One study examined this effect in nursing, but it was limited to a single hospital. To examine the cascading effect of leadership styles across hierarchical levels in a sample of nursing departments and to investigate the effect of hierarchical level on the relationships between leadership styles and various work outcomes. Based on a sample of eight hospitals, the cascading effect was tested using correlation analysis. The main sources of variation among leadership scores were determined with analyses of variance (ANOVA), and the interaction effect of hierarchical level and leadership styles on criterion variables was tested with moderated regression analysis. No support was found for a cascading effect of leadership across hierarchical levels. Rather, the variation of leadership scores was explained primarily by the organizational context. Transformational leadership had a stronger impact on criterion variables than transactional leadership. Interaction effects between leadership styles and hierarchical level were observed only for perceived unit effectiveness. The hospital's structure and culture are major determinants of leadership styles.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighting-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm² in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-tests showed that the TCE model was better than the biexponential model in gray and white matter. The corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
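As a hedged sketch of the model comparison, the code below fits a monoexponential model and a kurtosis-type truncated cumulant expansion, ln E(b) = -bD + (bD)²K/6, to synthetic data and compares corrected AIC values; the functional forms are the commonly used ones and the data are simulated, not the study's acquisitions.

import numpy as np
from scipy.optimize import curve_fit

def mono_exp(b, S0, D):
    """Monoexponential diffusion decay."""
    return S0 * np.exp(-b * D)

def tce(b, S0, D, K):
    """Kurtosis-type truncated cumulant expansion of ln E(b)."""
    return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

def aicc(y, y_hat, k):
    """Corrected Akaike information criterion (least-squares form)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# nine b-values up to 2500 s/mm^2, synthetic white-matter-like signal
b = np.linspace(0, 2500, 9)
rng = np.random.default_rng(1)
signal = tce(b, 1.0, 0.8e-3, 1.0) + 0.01 * rng.normal(size=b.size)

for name, model, p0 in [("mono", mono_exp, [1.0, 1e-3]),
                        ("TCE", tce, [1.0, 1e-3, 0.5])]:
    popt, _ = curve_fit(model, b, signal, p0=p0, maxfev=20000)
    print(name, "AICc =", round(aicc(signal, model(b, *popt), len(popt)), 1))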
Efficiency of Calamintha officinalis essential oil as preservative in two topical product types.
Nostro, A; Cannatelli, M A; Morelli, I; Musolino, A D; Scuderi, F; Pizzimenti, F; Alonzo, V
2004-01-01
To verify the efficiency of Calamintha officinalis essential oil as a natural preservative in two current formulations. C. officinalis essential oil at 1.0 and 2.0% (v/v) was assayed for its preservative activity in two product types (cream and shampoo). The microbial challenge test was performed following the standards proposed by the European Pharmacopoeia Commission (E.P.) concerning topical preparations, using standard micro-organisms and, in addition, wild strains, either in single or mixed cultures. The results clearly demonstrated that the C. officinalis essential oil at 2.0% concentration reduced the microbial inoculum, satisfying criterion A of the E.P. in the cream formulation and criterion B in the shampoo formulation. Standard and wild strains showed similar behaviour in both the cream and the shampoo formulation, with no significant difference (hierarchical analysis of variance, P > 0.05). C. officinalis essential oil confirmed its preservative properties, but at a higher concentration than that shown in previous studies on cetomacrogol cream. The nature of the formulation in which an essential oil is incorporated as a preservative could have a considerable effect on its efficacy.
Small Au clusters on a defective MgO(1 0 0) surface
NASA Astrophysics Data System (ADS)
Barcaro, Giovanni; Fortunelli, Alessandro
2008-05-01
The lowest-energy structures of small Au clusters adsorbed on a defective MgO(100) surface are searched for with a basin-hopping (BH) approach: after each move, if exp(-ΔE/kT) > rndm, where rndm is a random number (Metropolis criterion), the new configuration is accepted; otherwise the old configuration is kept, and the process is iterated. For each size we performed 3-5 BH runs, each one composed of 20-25 Monte Carlo steps, using a value of 0.5 eV as kT in the Metropolis criterion. Previous experience [13-15] shows that this is sufficient to single out the global minimum for adsorbed clusters of this size, and that the BH approach is more efficient as a global optimization algorithm than other techniques such as simulated annealing [18]. The MgO support was described via an (Mg12O12) cluster embedded in an array of ±2.0 a.u. point charges and repulsive pseudopotentials on the positive charges in direct contact with the cluster (see Ref. [15] for more details on the method). The atoms of the oxide cluster and the point charges were located at the lattice positions of the MgO rock-salt bulk structure using the experimental lattice constant of 4.208 Å. The energetics of the adsorbed clusters are characterized by: (i) the adhesion energy (E_adh), evaluated by subtracting the energy of the oxide surface and of the metal cluster, both frozen in their interacting configuration, from the value of the total energy of the system, and by taking the absolute value; (ii) the binding energy of the metal cluster (E_bnd), evaluated by subtracting the energy of the isolated metal atoms from the total energy of the metal cluster in its interacting configuration, and by taking the absolute value; (iii) the metal cluster distortion energy (E_dist), which corresponds to the difference between the energy of the metal cluster in the configuration interacting with the surface minus the energy of the cluster in its lowest-energy gas-phase configuration (a positive quantity); (iv) the oxide distortion energy (ΔE_ox), evaluated by subtracting the energy of the relaxed isolated defected oxide from the energy of the isolated defected oxide in the interacting configuration; and (v) the total binding energy (E_tot), which is the sum of the binding energy of the metal cluster and the adhesion energy minus the oxide distortion energy (E_tot = E_bnd + E_adh - ΔE_ox). Note that the total binding energy of gas-phase clusters in their global minima can be obtained by summing E_bnd + E_dist.
A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.
NASA Astrophysics Data System (ADS)
Lark, R. M.; Lapworth, D. J.
2014-05-01
It is well-known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one which meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations, but also logistics and cost. This requires effective communication between statisticians, soil scientists and data users/sponsors such as managers, regulators or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids, one instantiation is offset from the second by half the grid spacing along the rows and along the columns. If a sample grid is coarse relative to the important scales of variation in the target property then the consistency of predictions from two instantiations is expected to be small, and can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation, it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and has the advantage of having an upper bound (1.0) which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.
SU-F-J-25: Position Monitoring for Intracranial SRS Using BrainLAB ExacTrac Snap Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, S; McCaw, T; Huq, M
2016-06-15
Purpose: To determine the accuracy of position monitoring with BrainLAB ExacTrac snap verification following couch rotations during intracranial SRS. Methods: A CT scan of an anthropomorphic head phantom was acquired using 1.25mm slices. The isocenter was positioned near the centroid of the frontal lobe. The head phantom was initially aligned on the treatment couch using cone-beam CT, then repositioned using ExacTrac x-ray verification with residual errors less than 0.2mm and 0.2°. Snap verification was performed over the full range of couch angles in 15° increments with known positioning offsets of 0–3mm applied to the phantom along each axis. At each couch angle, the smallest tolerance was determined for which no positioning deviation was detected. Results: For couch angles 30°–60° from the center position, where the longitudinal axis of the phantom is approximately aligned with the beam axis of one x-ray tube, snap verification consistently detected positioning errors exceeding the maximum 8mm tolerance. Defining localization error as the difference between the known offset and the minimum tolerance for which no deviation was detected, the RMS error is mostly less than 1mm outside of couch angles 30°–60° from the central couch position. Given separate measurements of patient position from the two imagers, whether to proceed with treatment can be determined by the criterion of a reading within tolerance from just one (OR criterion) or both (AND criterion) imagers. Using a positioning tolerance of 1.5mm, snap verification has sensitivity and specificity of 94% and 75%, respectively, with the AND criterion, and 67% and 93%, respectively, with the OR criterion. If readings exceeding maximum tolerance are excluded, the sensitivity and specificity are 88% and 86%, respectively, with the AND criterion. Conclusion: With a positioning tolerance of 1.5mm, ExacTrac snap verification can be used during intracranial SRS with sensitivity and specificity between 85% and 90%.
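A toy simulation of the AND/OR decision rules described above (proceed only if both imagers read within tolerance, or if at least one does), with made-up 1 mm localization errors rather than the phantom measurements:

import numpy as np

rng = np.random.default_rng(2)
n = 10000
true_offset = rng.uniform(0.0, 3.0, n)          # applied positional error (mm)
truly_out = true_offset > 1.5                   # ground truth: offset exceeds tolerance

# each imager reads the offset with an assumed ~1 mm localization error
read_a = true_offset + rng.normal(0.0, 1.0, n)
read_b = true_offset + rng.normal(0.0, 1.0, n)
within_a, within_b = np.abs(read_a) <= 1.5, np.abs(read_b) <= 1.5

for name, proceed in [("AND", within_a & within_b), ("OR", within_a | within_b)]:
    sens = np.mean(~proceed[truly_out])     # out-of-tolerance setups that get flagged
    spec = np.mean(proceed[~truly_out])     # in-tolerance setups allowed to proceed
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")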
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
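For reference, the static Markowitz-style step that the paper contrasts with its stochastic treatment can be sketched as the closed-form global minimum-variance portfolio under the budget constraint that the weights sum to one; the index returns below are simulated, not FTSE Bursa Malaysia data, and the mean/volatility ratio is only a proxy for an information ratio.

import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance portfolio under the budget constraint sum(w) = 1.

    returns -- (T x N) matrix of historical period returns (static inputs).
    Closed form: w = C^{-1} 1 / (1' C^{-1} 1).
    """
    C = np.cov(returns, rowvar=False)
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)
    return x / (ones @ x)

# simulated monthly returns for five sectorial indices (illustrative only)
rng = np.random.default_rng(3)
R = 0.01 + 0.04 * rng.normal(size=(60, 5))
w = min_variance_weights(R)
mu, sigma = R.mean(axis=0) @ w, np.sqrt(w @ np.cov(R, rowvar=False) @ w)
print("weights:", np.round(w, 3))
print("mean/volatility (information-ratio proxy):", round(mu / sigma, 3))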
How quantitative measures unravel design principles in multi-stage phosphorylation cascades.
Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf
2008-09-07
We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
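A minimal sketch of the quantitative measures for a linear, weakly activated two-stage cascade, assuming the moment-based definitions of signaling time, signal duration and signal amplitude commonly used for such cascades; the rate constants and input signal are arbitrary, not taken from the MAPK models studied in the paper.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def cascade(t, x, alpha, beta, signal_tau):
    """Weakly activated two-stage cascade: a decaying input activates x1, x1 activates x2."""
    s = np.exp(-t / signal_tau)
    x1, x2 = x
    return [alpha * s - beta * x1, alpha * x1 - beta * x2]

def quantitative_measures(t, x):
    """Signaling time, signal duration and amplitude from the time course x(t)."""
    I0 = trapezoid(x, t)
    tau = trapezoid(t * x, t) / I0                          # signaling time
    theta = np.sqrt(trapezoid(t**2 * x, t) / I0 - tau**2)   # signal duration
    S = I0 / (2 * theta)                                    # signal amplitude
    return tau, theta, S

t_eval = np.linspace(0, 200, 2000)
sol = solve_ivp(cascade, (0, 200), [0.0, 0.0], args=(0.5, 0.2, 5.0), t_eval=t_eval)
for i, name in enumerate(["stage 1", "stage 2"]):
    tau, theta, S = quantitative_measures(sol.t, sol.y[i])
    print(f"{name}: time={tau:.1f}, duration={theta:.1f}, amplitude={S:.3f}")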
Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests
Carr, R.S.; Biedenbach, J.M.
1999-01-01
When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, a minimum significant difference (MSD) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, for α ≤ 0.05 and α ≤ 0.01, respectively, were determined. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.
Cellular and dendritic growth in a binary melt - A marginal stability approach
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1986-01-01
A simple model for the constrained growth of an array of cells or dendrites in a binary alloy in the presence of an imposed positive temperature gradient in the liquid is proposed, with the dendritic or cell tip radius calculated using the marginal stability criterion of Langer and Muller-Krumbhaar (1977). This approach, an approach adopting the ad hoc assumption of minimum undercooling at the cell or dendrite tip, and an approach based on the stability criterion of Trivedi (1980) all predict tip radii to within 30 percent of each other, and yield a simple relationship between the tip radius and the growth conditions. Good agreement is found between predictions and data obtained in a succinonitrile-acetone system, and under the present experimental conditions, the dendritic tip stability parameter value is found to be twice that obtained previously, possibly due to a transition in morphology from a cellular structure with just a few side branches, to a more fully developed dendritic structure.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng
2018-05-31
For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods to solve the state estimation problems, and both can obtain good performance in Gaussian noises. However, their performances often degrade significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by some heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve the robust performance instead of traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named as the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problem is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Dobbin, Nick; Hunwicks, Richard; Jones, Ben; Till, Kevin; Highton, Jamie; Twist, Craig
2018-02-01
To examine the criterion and construct validity of an isometric midthigh-pull dynamometer to assess whole-body strength in professional rugby league players. Fifty-six male rugby league players (33 senior and 23 youth players) performed 4 isometric midthigh-pull efforts (ie, 2 on the dynamometer and 2 on the force platform) in a randomized and counterbalanced order. Isometric peak force was underestimated (P < .05) using the dynamometer compared with the force platform (95% LoA: -213.5 ± 342.6 N). Linear regression showed that peak force derived from the dynamometer explained 85% (adjusted R 2 = .85, SEE = 173 N) of the variance in the dependent variable, with the following prediction equation derived: predicted peak force = [1.046 × dynamometer peak force] + 117.594. Cross-validation revealed a nonsignificant bias (P > .05) between the predicted and peak force from the force platform and an adjusted R 2 (79.6%) that represented shrinkage of 0.4% relative to the cross-validation model (80%). Peak force was greater for the senior than the youth professionals using the dynamometer (2261.2 ± 222 cf 1725.1 ± 298.0 N, respectively; P < .05). The isometric midthigh pull assessed using a dynamometer underestimates criterion peak force but is capable of distinguishing muscle-function characteristics between professional rugby league players of different standards.
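Applying the reported prediction equation is straightforward; the dynamometer readings below are illustrative, not the study's raw data.

import numpy as np

def predict_platform_peak_force(dyno_peak_force_n):
    """Force-platform peak force (N) predicted from the dynamometer reading (N),
    using the regression equation reported in the abstract."""
    return 1.046 * np.asarray(dyno_peak_force_n, dtype=float) + 117.594

# illustrative dynamometer readings (N)
dyno = np.array([1700.0, 2000.0, 2260.0])
print(np.round(predict_platform_peak_force(dyno), 1))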
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
2012-09-01
Report fragment from the ARL Translational Neuroscience Branch covering helmet-based EEG acquisition systems: the Emotiv EPOC, the Advanced Brain Monitoring (ABM) B-Alert X10, and the QUASAR DSI (cf. ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).
ERIC Educational Resources Information Center
Johnson, Jim
2017-01-01
A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…
On the critical flame radius and minimum ignition energy for spherical flame initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Burke, M. P.; Ju, Yiguang
2011-01-01
Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuel: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed and a new criterion for the critical flame radius is introduced. For mixtures with Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e. the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.
Na, Muzi; Jennings, Larissa; Talegawkar, Sameera A; Ahmed, Saifuddin
2015-12-01
To explore the relationship between women's empowerment and WHO recommended infant and young child feeding (IYCF) practices in sub-Saharan Africa. Analysis was conducted using data from ten Demographic and Health Surveys between 2010 and 2013. Women's empowerment was assessed by nine standard items covering three dimensions: economic, socio-familial and legal empowerment. Three core IYCF practices examined were minimum dietary diversity, minimum meal frequency and minimum acceptable diet. Separate multivariable logistic regression models were applied for the IYCF practices on dimensional and overall empowerment in each country. Benin, Burkina Faso, Ethiopia, Mali, Niger, Nigeria, Rwanda, Sierra Leone, Uganda and Zimbabwe. Youngest singleton children aged 6-23 months and their mothers (n 15 153). Less than 35 %, 60 % and 18 % of children 6-23 months of age met the criterion of minimum dietary diversity, minimum meal frequency and minimum acceptable diet, respectively. In general, likelihood of meeting the recommended IYCF criteria was positively associated with the economic dimension of women's empowerment. Socio-familial empowerment was negatively associated with the three feeding criteria, except in Zimbabwe. The legal dimension of empowerment did not show any clear pattern in the associations. Greater overall empowerment of women was consistently and positively associated with multiple IYCF practices in Mali, Rwanda and Sierra Leone. However, consistent negative relationships were found in Benin and Niger. Null or mixed results were observed in the remaining countries. The importance of women's empowerment for IYCF practices needs to be discussed by context and by dimension of empowerment.
The production route selection algorithm in virtual manufacturing networks
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Skolud, B.; Olender, M.
2017-08-01
Increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation has become the basis for constructing virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. The paper proposes an algorithm for selecting, from the set of admissible routes, a production route that meets the technological and resource requirements under the criterion of minimum cost.
1977-09-01
The interpolation algorithm allows this to be done when the transition boundaries are defined close together and parallel to one another. A goodness-of-fit criterion for a set of samples, based on variable kernel estimates of the density, is discussed; for the unimodal case the absolute minimum of the criterion occurs at approximately k = 100.
Shape optimization of the modular press body
NASA Astrophysics Data System (ADS)
Pabiszczak, Stanisław
2016-12-01
The paper presents an algorithm for optimizing the cross-sectional dimensions of a modular press body under the minimum-mass criterion. The wall thicknesses and the angle of their inclination relative to the base of the section are taken as the decision variables. The overall dimensions are treated as constants. The optimal parameter values were calculated numerically with the Solver tool in Microsoft Excel. The results of the optimization procedure helped reduce body weight by 27% while maintaining the required rigidity of the body.
Interpretation of impeller flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuzson, J.
1993-09-01
Most available computer programs are analysis and not design programs. Therefore the intervention of the designer is indispensable. Guidelines are needed to evaluate the degree of fluid mechanic perfection of a design which is compromised for practical reasons. A new way of plotting the computer output is proposed here which illustrates the energy distribution throughout the flow. The consequence of deviating from optimal flow pattern is discussed and specific cases are reviewed. A criterion is derived for the existence of a jet/wake flow pattern and for the minimum wake mixing loss.
A methodology based on reduced complexity algorithm for system applications using microprocessors
NASA Technical Reports Server (NTRS)
Yan, T. Y.; Yao, K.
1988-01-01
The paper presents a methodology for the analysis and design of a linear system under the minimum mean-square error criterion, incorporating a tapped delay line (TDL) in which all the full-precision multiplications are constrained to be powers of two. A linear equalizer for a dispersive, additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
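A hedged sketch of the power-of-two constraint: each full-precision TDL tap is rounded to the nearest signed power of two (in the log2 sense) so that multiplications reduce to shifts, and the resulting output error is checked; the taps are made up and this is not the paper's design procedure.

import numpy as np

def nearest_power_of_two(w):
    """Round each coefficient to the nearest signed power of two (nearest in log2; zeros kept)."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    nz = w != 0
    exps = np.round(np.log2(np.abs(w[nz])))
    out[nz] = np.sign(w[nz]) * 2.0 ** exps
    return out

# a small illustrative FIR equalizer (made-up full-precision taps)
taps = np.array([0.031, -0.12, 0.48, 1.02, 0.48, -0.12, 0.031])
q_taps = nearest_power_of_two(taps)
print("quantized taps:", q_taps)

# compare output error on a random input against the full-precision filter
rng = np.random.default_rng(4)
x = rng.normal(size=5000)
y_full = np.convolve(x, taps, mode="same")
y_q = np.convolve(x, q_taps, mode="same")
print("relative MSE:", np.mean((y_full - y_q) ** 2) / np.mean(y_full ** 2))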
San Francisco floating STOLport study
NASA Technical Reports Server (NTRS)
1974-01-01
The operational, economic, environmental, social and engineering feasibility of utilizing deactivated maritime vessels as a waterfront quiet short takeoff and landing facility to be located near the central business district of San Francisco was investigated. Criteria were developed to evaluate each site, and minimum standards were established for each criterion. Predicted conditions at the two sites were compared to the requirements for each of the 11 criteria as a means of evaluating site performance. Criteria include land use, community structure, economic impact, access, visual character, noise, air pollution, natural environment, weather, air traffic, and terminal design.
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer based on the MVB is proposed with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can reproduce the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
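A sketch in the spirit of the abstract, not the authors' exact update: a projected-gradient iteration minimizes w^H R w subject to the distortionless constraint a^H w = 1, so each step costs O(L^2) instead of the O(L^3) inversion, and a neighboring point's weights can serve as the warm start w0.

import numpy as np

def mv_weights_iterative(R, a, w0=None, mu=0.02, n_iter=500):
    """Iterative minimum-variance weights: minimize w^H R w subject to a^H w = 1.

    Projected-gradient sketch; w0 (if given) must already satisfy the constraint,
    e.g. the solution at a neighboring imaging point used as a warm start.
    """
    a = a.astype(complex)
    # feasible starting point (delay-and-sum weights if no warm start is supplied)
    w = a / (a.conj() @ a) if w0 is None else w0.astype(complex)
    P = np.eye(len(a)) - np.outer(a, a.conj()) / (a.conj() @ a)  # projector keeping a^H w fixed
    for _ in range(n_iter):
        g = R @ w                      # gradient of w^H R w (up to a factor of 2)
        w = w - mu * (P @ g)           # move only within the constraint set
    return w

# toy example: 8-element array, interference-plus-noise covariance (made-up)
rng = np.random.default_rng(5)
L = 8
a = np.exp(1j * np.pi * np.arange(L) * np.sin(0.0))       # look direction (broadside)
ai = np.exp(1j * np.pi * np.arange(L) * np.sin(0.5))      # interferer direction
R = 10 * np.outer(ai, ai.conj()) + np.eye(L)
w_iter = mv_weights_iterative(R, a)
w_inv = np.linalg.solve(R, a); w_inv /= a.conj() @ w_inv  # closed-form MVB for comparison
print(np.allclose(w_iter, w_inv, atol=1e-2))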
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly when related to cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace the location of spectral lines in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are among such impacts.
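A minimal Capon/MVM sketch for comparison with an FFT periodogram; the filter order plays the role of the degrees of freedom discussed above, and the normalization of the MVM spectrum is one of several conventions.

import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate of a series x.

    order -- filter length (degrees of freedom of the MVM filter)
    freqs -- normalized frequencies in cycles/sample
    """
    N = len(x)
    # sample covariance matrix from overlapping length-`order` snapshots
    snaps = np.array([x[i:i + order] for i in range(N - order + 1)])
    R = (snaps.conj().T @ snaps) / snaps.shape[0]
    R += 1e-6 * np.trace(R).real / order * np.eye(order)   # light diagonal loading
    Rinv = np.linalg.inv(R)
    p = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(order))
        p.append(order / np.real(a.conj() @ Rinv @ a))      # up to a normalization convention
    return np.array(p)

# two sinusoids in noise (illustrative)
rng = np.random.default_rng(6)
n = np.arange(512)
x = np.sin(2 * np.pi * 0.1 * n) + 0.5 * np.sin(2 * np.pi * 0.13 * n) + 0.2 * rng.normal(size=512)
freqs = np.linspace(0, 0.5, 256)
p_mvm = capon_spectrum(x, order=32, freqs=freqs)
p_fft = np.abs(np.fft.rfft(x, 512)) ** 2 / 512
print(f"MVM maximum at f = {freqs[np.argmax(p_mvm)]:.3f} cycles/sample")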
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process and can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness.
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a minimum variance and pole placements coupled (MVPPC) control law and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also practically show the TP control law to be the best in dealing with machining instability and machining efficiency, although the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Why do Children Differ in Their Development of Reading and Related Skills?
Olson, Richard K.; Keenan, Janice M.; Byrne, Brian; Samuelsson, Stefan
2013-01-01
Modern behavior-genetic studies of twins in the U.S., Australia, Scandinavia, and the U.K. show that genes account for most of the variance in children's reading ability by the end of the first year of formal reading instruction. Strong genetic influence continues across the grades, though the relevant genes vary for reading words and comprehending text, and some of the genetic influence comes through a gene – environment correlation. Strong genetic influences do not diminish the importance of the environment for reading development in the population and for helping struggling readers, but they question setting the same minimal performance criterion for all children. PMID:25104901
Mean-Reverting Portfolio With Budget Constraint
NASA Astrophysics Data System (ADS)
Zhao, Ziping; Palomar, Daniel P.
2018-05-01
This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
NASA Astrophysics Data System (ADS)
Björnbom, Pehr
2016-03-01
In the first part of this work equilibrium temperature profiles in fluid columns with ideal gas or ideal liquid were obtained by numerically minimizing the column energy at constant entropy, equivalent to maximizing column entropy at constant energy. A minimum in internal plus potential energy for an isothermal temperature profile was obtained in line with Gibbs' classical equilibrium criterion. However, a minimum in internal energy alone for adiabatic temperature profiles was also obtained. This led to a hypothesis that the adiabatic lapse rate corresponds to a restricted equilibrium state, a type of state in fact discussed already by Gibbs. In this paper similar numerical results for a fluid column with saturated air suggest that also the saturated adiabatic lapse rate corresponds to a restricted equilibrium state. The proposed hypothesis is further discussed and amended based on the previous and the present numerical results and a theoretical analysis based on Gibbs' equilibrium theory.
Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, J. M.; Liu, Y.; Li, W.
2013-09-15
An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of plasma radiation toroidal symmetry, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize a regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
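The generalized cross-validation criterion mentioned above can be sketched on a plain Tikhonov-regularized inversion (a simplified stand-in for minimum Fisher regularization, not the diagnostic's actual code); the geometry matrix and emission profile below are synthetic.

import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross-validation score for a Tikhonov-regularized inversion."""
    n = A.shape[0]
    # influence (hat) matrix of the regularized solution
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    resid = (np.eye(n) - H) @ b
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

# toy tomography-like problem: smooth emission profile seen through a blurring geometry matrix
rng = np.random.default_rng(7)
x_true = np.exp(-np.linspace(-2, 2, 50) ** 2)
A = np.abs(rng.normal(size=(40, 50))) * np.exp(-5 * np.abs(np.subtract.outer(np.linspace(0, 1, 40), np.linspace(0, 1, 50))))
b = A @ x_true + 0.05 * rng.normal(size=40)

lams = np.logspace(-4, 2, 30)
scores = [gcv_score(A, b, lam) for lam in lams]
lam_opt = lams[int(np.argmin(scores))]
x_hat = np.linalg.solve(A.T @ A + lam_opt * np.eye(50), A.T @ b)
print("GCV-optimal lambda:", lam_opt)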
A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network
NASA Astrophysics Data System (ADS)
Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.
The location of a CDC within a supply chain network has become a matter of high concern. Existing methods for CDC location have relied mainly on manual spreadsheet calculations aimed at minimum logistics cost. This study focuses on developing a new processing algorithm to overcome the limits of present methods, and on examining the suitability of this algorithm through a case study. The suggested algorithm is based on optimization over the directed graph of the SCM model and makes use of traditionally introduced techniques such as the minimum spanning tree (MST) and shortest-path methods. The results of this study help assess the suitability of an existing SCM network and can serve as a criterion in the decision-making process for building an optimal SCM network for future demand.
NASA Astrophysics Data System (ADS)
Chen, Qingfa; Zhao, Fuyu
2017-12-01
Numerous pillars are left after mining of underground mineral resources using the open stope method or after the first step of the partial filling method. The mineral recovery rate can, however, be improved by replacement recovery of pillars. In the present study, the relationships among the pillar type, minimum pillar width, and micro/macroeconomic factors were investigated from two perspectives, namely mechanical stability and micro/macroeconomic benefit. Based on the mechanical stability formulas for ore and artificial pillars, the minimum width for a specific pillar type was determined using a pessimistic criterion. The microeconomic benefit c of setting an ore pillar, the microeconomic benefit w of artificial pillar replacement, and the economic net present value (ENPV) of the replacement process were calculated. The values of c and w were compared with respect to ENPV, based on which the appropriate pillar type and economical benefit were determined.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimally designing the additional drilling pattern or defining the optimum number and location of additional boreholes. Quite a lot of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as the criterion for uncertainty assessment, and the problem is solved through optimization methods. Although the use of kriging variance is known to have many advantages in defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundaries uncertainty assessment, the application of combined variance is investigated to define the objective function. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to the variations of grade, the domain boundaries and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms proved that for the presented case the application of particle swarm optimization is more appropriate than simulated annealing.
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi
2002-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
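A sketch of the classical minimum variance analysis used here: eigen-decomposition of the magnetic variance matrix, with the smallest-variance eigenvector taken as the estimate of the discontinuity normal and a small normal field component suggesting a tangential discontinuity; the field data below are synthetic, not Ulysses measurements.

import numpy as np

def minimum_variance_analysis(B):
    """Minimum variance analysis of an (N x 3) magnetic field time series.

    Returns eigenvalues (ascending) and eigenvectors of the variance matrix
    M_ij = <B_i B_j> - <B_i><B_j>; the eigenvector of the smallest eigenvalue
    estimates the discontinuity normal.
    """
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False, bias=True)
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    return vals, vecs

# synthetic field rotating in a plane with a small constant normal component
t = np.linspace(0, 1, 200)[:, None]
B = np.hstack([5 * np.cos(np.pi * t), 5 * np.sin(np.pi * t), 0.1 * np.ones_like(t)])
B += 0.2 * np.random.default_rng(8).normal(size=B.shape)

vals, vecs = minimum_variance_analysis(B)
normal = vecs[:, 0]
Bn = B @ normal
print("eigenvalue ratio lambda_int/lambda_min:", round(vals[1] / vals[0], 1))
print("mean |B_n| (small for a tangential discontinuity):", round(abs(Bn.mean()), 2))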
NASA Technical Reports Server (NTRS)
Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
Performance bounds for matched field processing in subsurface object detection applications
NASA Astrophysics Data System (ADS)
Sahin, Adnan; Miller, Eric L.
1998-09-01
In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In a previous work, we have considered the use of high resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to a plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB has shown that there exists an optimum inter-element spacing of array elements for which the CRB is minimum. Furthermore, the optimum inter-element spacing minimizing CRB is smaller than the conventional half wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations. The first estimator is the MUSIC-based MFP and the second one is the maximum likelihood based MFP. The two approaches differ in the cost functions they optimize. We observe that Monte-Carlo simulated error variances always lie above the values established by CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm strongly depends on the model used, we have tested the performance of the algorithm when the object radius used in the model is different from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects with a bias depending on the degree of mismatch.
Su, Yingjuan; Wang, Ting; Zheng, Bo; Jiang, Yu; Chen, Guopei; Gu, Hongya
2004-11-01
Sequences of chloroplast DNA (cpDNA) atpB-rbcL intergenic spacers of individuals of a tree fern species, Alsophila spinulosa, collected from ten relict populations distributed in the Hainan and Guangdong provinces, and the Guangxi Zhuang region in southern China, were determined. Sequence length varied from 724 bp to 731 bp, showing length polymorphism, and the base composition had a high A+T content, between 63.17% and 63.95%. Sequences were evolutionarily neutral (Tajima's criterion D=-1.01899, P>0.10; Fu and Li's test D*=-1.39008, P>0.10; F*=-1.49775, P>0.10). A total of 19 haplotypes were identified based on nucleotide variation. High levels of haplotype diversity (h=0.744) and nucleotide diversity (Dij=0.01130) were detected in A. spinulosa, probably associated with its long evolutionary history, which has allowed the accumulation of genetic variation within lineages. Both the minimum spanning network and neighbor-joining trees generated for the haplotypes demonstrated that current populations of A. spinulosa existing in Hainan, Guangdong, and Guangxi are subdivided into two geographical groups. An analysis of molecular variance indicated that most of the genetic variation (93.49%, P<0.001) was partitioned among regions. Wright's isolation-by-distance model was not supported across extant populations. Reduced gene flow across the Qiongzhou Strait and inbreeding may explain the geographical subdivision between the Hainan and Guangdong + Guangxi populations (FST=0.95, Nm=0.03). Within each region, the star-like pattern of the haplotype phylogeography implied a population expansion during evolutionary history. Gene genealogies together with coalescent theory provided significant information for uncovering the phylogeography of A. spinulosa.
Setting Priorities in Global Child Health Research Investments: Addressing Values of Stakeholders
Kapiriri, Lydia; Tomlinson, Mark; Gibson, Jennifer; Chopra, Mickey; El Arifeen, Shams; Black, Robert E.; Rudan, Igor
2007-01-01
Aim To identify the main groups of stakeholders in the process of health research priority setting and propose strategies for addressing their systems of values. Methods In three separate exercises that took place between March and June 2006 we interviewed three different groups of stakeholders: 1) members of the global research priority setting network; 2) a diverse group of national-level stakeholders from South Africa; and 3) participants at a conference on international child health held in Washington, DC, USA. Each of the groups was administered a different version of the questionnaire in which they were asked to assign weights to criteria (and also minimum required thresholds, where applicable) that were a priori defined as relevant to health research priority setting by the consultants of the Child Health and Nutrition Research Initiative (CHNRI). Results At the global level, the wide and diverse group of respondents placed the greatest importance (weight) on the criterion of maximum potential for disease burden reduction, while the most stringent threshold was placed on the criterion of answerability in an ethical way. Among the stakeholders' representatives attending the international conference, the criterion of deliverability, answerability, and sustainability of health research results was rated as the most important. At the national level in South Africa, the greatest weight was placed on the criterion addressing the predicted impact on equity of the proposed health research. Conclusions Involving a large group of stakeholders when setting priorities in health research investments is important because the criteria of relevance to scientists and technical experts, whose knowledge and technical expertise is usually central to the process, may not be appropriate to specific contexts or in accordance with the views and values of those who invest in health research, those who benefit from it, or wider society as a whole. PMID:17948948
Optimization of deformation monitoring networks using finite element strain analysis
NASA Astrophysics Data System (ADS)
Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.
2018-04-01
An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the expense of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks because these networks are measured repeatedly. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution in which the precision criterion is defined based on the precision of the deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of the 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
Generalized Bohm’s criterion and negative anode voltage fall in electric discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Londer, Ya. I.; Ul’yanov, K. N., E-mail: kulyanov@vei.ru
2013-10-15
The value of the voltage fall across the anode sheath is found as a function of the current density. Analytic solutions are obtained in a wide range of the ratio of the directed velocity of plasma electrons v_0 to their thermal velocity v_T. It is shown that the voltage fall in a one-dimensional collisionless anode sheath is always negative. At small values of v_0/v_T, the obtained expression asymptotically transforms into the Langmuir formula. Generalized Bohm's criterion for an electric discharge with allowance for the space charge density ρ(0), the electric field E(0), the ion velocity v_i(0), and the ratio v_0/v_T at the plasma-sheath interface is formulated. It is shown that the minimum value of the ion velocity v_i*(0) corresponds to the vanishing of the electric field at one point inside the sheath. The dependence of v_i*(0) on ρ(0), E(0), and v_0/v_T determines the boundary of the existence domain of stationary solutions in the sheath. Using this criterion, the maximum possible degree of contraction of the electron current at the anode is determined for a short high-current vacuum arc discharge.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
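A toy version of such a Monte Carlo experiment, assuming a simple data-generating model and statsmodels' maximum-likelihood `Logit`; how non-converged or separated data sets are handled here (they are simply skipped) is purely illustrative, which is exactly the kind of choice the paper shows can drive the results:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def simulate_bias(n, n_covariates, beta, n_sims=500):
    """Rough Monte Carlo of small-sample bias in logistic regression.

    Returns the mean estimated coefficient for the first covariate and the
    average events-per-variable (EPV) across simulated data sets.
    """
    estimates, epvs = [], []
    for _ in range(n_sims):
        X = rng.normal(size=(n, n_covariates))
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        y = rng.binomial(1, p)
        epvs.append(min(y.sum(), n - y.sum()) / n_covariates)
        try:
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
            estimates.append(fit.params[1])
        except Exception:      # separation or non-convergence: skip this data set
            continue
    return np.mean(estimates), np.mean(epvs)

beta = np.array([0.7] + [0.0] * 4)          # one true effect, four noise covariates
print(simulate_bias(60, 5, beta))            # low-EPV setting: expect upward bias
```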
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia, in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistical method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistical variance-reduction method and simulated annealing is successful in the development of the new optimum rain gauge system.
Experimental demonstration of quantum teleportation of a squeezed state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro
2005-10-15
Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and its squeezed variance is certainly transferred through the process. We observe the smaller variance of the teleported squeezed state than that for the vacuum state input.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
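For orientation, the uniform white-sequence model assigns the quantizer error a variance of Δ²/12; a hedged sketch of folding that into an effective SNR follows (the paper's exact equivalent-spectral-density definition is not reproduced here):

```python
import numpy as np

def quantizer_noise_variance(step):
    """Variance of the uniform-white quantization error model: Delta^2 / 12."""
    return step ** 2 / 12.0

def effective_snr_db(signal_power, input_noise_power, step):
    """Effective SNR when the quantizer error is lumped with the input noise.

    Treats the quantization error as an additional white noise source, which
    is the modelling assumption discussed in the abstract.
    """
    total_noise = input_noise_power + quantizer_noise_variance(step)
    return 10.0 * np.log10(signal_power / total_noise)

print(effective_snr_db(1.0, 0.05, step=0.1))
```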
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
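A hedged sketch of the constant-variance branch of this idea using ordinary least squares and 99% prediction intervals; the grid search, column names and the omission of the variance-weighted branch are simplifications for illustration, not the EPA procedure itself:

```python
import numpy as np
import statsmodels.api as sm

def lcmrl(true_conc, measured, alpha=0.01, lo=0.5, hi=1.5):
    """Grid-search sketch of the LCMRL idea described above.

    Fits an ordinary least-squares line (constant-variance case only),
    builds 99% prediction intervals, and returns the lowest true
    concentration at which both interval limits fall inside the
    50%-150% recovery band.
    """
    X = sm.add_constant(np.asarray(true_conc, dtype=float))
    res = sm.OLS(np.asarray(measured, dtype=float), X).fit()
    grid = np.linspace(min(true_conc), max(true_conc), 500)
    pred = res.get_prediction(sm.add_constant(grid)).summary_frame(alpha=alpha)
    ok = (pred["obs_ci_lower"].values >= lo * grid) & \
         (pred["obs_ci_upper"].values <= hi * grid)
    return grid[ok][0] if ok.any() else None
```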
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the available numerical and symbolic computational software. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instances when one variance is a minimum and the other is a maximum, i.e., when the squeezing of one of the variances occurs. Such an explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify the performance of imaging resolution and the contrast ratio (CR) to quantify the performance of imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformer, while the PSL is 127.2dB, 115dB and 60dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformer respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay and the results show that the new beamformer provides better robustness against the sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
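For context, the minimum variance (Capon) core that the proposed method builds on can be sketched as below; the eigenspace projection and the Wiener postfilter described in the abstract are not reproduced, and the diagonal loading is an assumption added for numerical robustness:

```python
import numpy as np

def mv_weights(R, a, diagonal_loading=1e-3):
    """Minimum variance (Capon) weights w = R^{-1} a / (a^H R^{-1} a).

    R is the sample covariance matrix of the received channel data and
    `a` is the steering vector for the imaging point.
    """
    R = R + diagonal_loading * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Beamformer output for one imaging point with channel snapshot x:
# y = mv_weights(R, a).conj() @ x
```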
Validity of Futrex-5000 for body composition determination.
McLean, K P; Skinner, J S
1992-02-01
Underwater weighing (UWW), skinfolds (SKF), and the Futrex-5000 (FTX) were compared by using UWW as the criterion measure of body fat in 30 male and 31 female Caucasians. Estimates of body fat (% fat) were obtained using The Y's Way to Fitness SKF equations and the standard FTX technique with near-infrared interactance (NIR) measured at the biceps, plus six sites for men and five sites for women. SKF correlated significantly higher with UWW than did FTX with UWW for males (0.95 vs 0.80), females (0.88 vs 0.63), and the whole group (0.94 vs 0.81). Fewer subjects (52%) were within +/- 4% of the UWW value using FTX, compared with 87% with SKF. FTX overestimated body fat in lean subjects with less than 8% fat and underestimated it in subjects with greater than 30% fat. Measuring NIR at additional sites did not improve the predicted variance. Partial F-tests indicate that using body mass index, instead of height and weight, in the FTX equation improved body fat prediction for females. Biceps NIR predicted additional variance in body fat beyond height, weight, frame size, and activity level but little variance above that predicted by these four variables plus SKF (2% more in males and less than 1% in females). Thus, SKF give more information and more accurately predict body fat, especially at the extremes of the body fat continuum.
Understanding the P×S Aspect of Within-Person Variation: A Variance Partitioning Approach
Lakey, Brian
2016-01-01
This article reviews a variance partitioning approach to within-person variation based on Generalizability Theory and the Social Relations Model. The approach conceptualizes an important part of within-person variation as Person × Situation (P×S) interactions: differences among persons in their profiles of responses across the same situations. The approach provided the first quantitative method for capturing within-person variation and demonstrated very large P×S effects for a wide range of constructs. These include anxiety, five-factor personality traits, perceived social support, leadership, and task performance. Although P×S effects are commonly very large, conceptual and analytic obstacles have thwarted consistent progress. For example, how does one develop a psychological, versus purely statistical, understanding of P×S effects? How does one forecast future behavior when the criterion is a P×S effect? How can understanding P×S effects contribute to psychological theory? This review describes potential solutions to these and other problems developed in the course of conducting research on the P×S aspect of social support. Additional problems that need resolution are identified. PMID:26858661
Development and psychometric testing of the Cancer Knowledge Scale for Elders.
Su, Ching-Ching; Chen, Yuh-Min; Kuo, Bo-Jein
2009-03-01
To develop the Cancer Knowledge Scale for Elders and test its validity and reliability. The number of elders suffering from cancer is increasing. To facilitate cancer prevention behaviours among elders, they should be provided with cancer-related knowledge. Prior to designing a programme that would respond to the special needs of elders, understanding the cancer-related knowledge within this population was necessary. However, extensive review of the literature revealed a lack of appropriate instruments for measuring cancer-related knowledge. A valid and reliable cancer knowledge scale for elders is therefore necessary. A non-experimental methodological design was used to test the psychometric properties of the Cancer Knowledge Scale for Elders. Item analysis was first performed to screen out items that had low corrected item-total correlation coefficients. Construct validity was examined with a principal component method of exploratory factor analysis. Cancer-related health behaviour was used as the criterion variable to evaluate criterion-related validity. Internal consistency reliability was assessed by the KR-20. Stability was determined by two-week test-retest reliability. The factor analysis yielded a four-factor solution accounting for 49.5% of the variance. For criterion-related validity, cancer knowledge was positively correlated with cancer-related health behaviour (r = 0.78, p < 0.001). The KR-20 coefficients of the four factors were 0.85, 0.76, 0.79 and 0.67, and 0.87 for the total scale. Test-retest reliability over a two-week period was 0.83 (p < 0.001). This study provides evidence for the content validity, construct validity, criterion-related validity, internal consistency and stability of the Cancer Knowledge Scale for Elders. The results show that this scale is an easy-to-use instrument for elders and has adequate validity and reliability. The scale can be used as an assessment instrument when implementing cancer education programmes for elders. It can also be used to evaluate the effects of education programmes.
Chun, Seokjoon; Harris, Alexa; Carrion, Margely; Rojas, Elizabeth; Stark, Stephen; Lejuez, Carl; Lechner, William V.; Bornovalova, Marina A.
2016-01-01
The comorbidity between Borderline Personality Disorder (BPD) and Antisocial Personality Disorder (ASPD) is well-established, and the two disorders share many similarities. However, there are also differences across the disorders: most notably, BPD is diagnosed more frequently in females and ASPD in males. We investigated whether a) the comorbidity between BPD and ASPD is attributable to two discrete disorders or to the expression of common underlying processes, and b) the model of comorbidity holds across sex. Using a clinical sample of 1400 drug users in residential substance abuse treatment, we tested three competing models to explore whether the comorbidity of ASPD and BPD should be represented by a single common factor, two correlated factors, or a bifactor structure involving a general factor and disorder-specific factors. Next, we tested whether the resulting model was meaningful by examining its relationship with criterion variables previously reported to be associated with BPD and ASPD. The bifactor model provided the best fit and was invariant across sex. Overall, the general factor of the bifactor model significantly accounted for a large percentage of the variance in the criterion variables, whereas the BPD- and AAB-specific factors added little to the models. The association of the general and specific factors with all criterion variables was equal for males and females. Our results suggest that a common underlying vulnerability accounts for the comorbidity between BPD and AAB across sex, and that this common vulnerability drives the association with other psychopathology and maladaptive behavior. This in turn has implications for diagnostic classification systems and treatment. General scientific summary This study found that, for both males and females, borderline and antisocial personality disorders show a large degree of overlap, and little uniqueness. The commonality between BPD and ASPD mainly accounted for associations with criterion variables. This suggests that BPD and ASPD share a large common core that accounts for their comorbidity. PMID:27808543
Schlieve, Thomas; Funderburk, Joseph; Flick, William; Miloro, Michael; Kolokythas, Antonia
2015-03-01
This study investigated the influence of specific criteria on referral selection among general dentists and orthodontists in deciding referrals to oral and maxillofacial surgeons. A cross-sectional study was designed to examine the importance of criteria used by 2 groups of practitioners, general dentists and orthodontists, for deciding on referrals to oral and maxillofacial surgeons. Data were collected by 2 multiple-choice surveys. The surveys were e-mailed to general dentists and orthodontists practicing in the state of Illinois and to graduates from the University of Illinois at Chicago (UIC) College of Dentistry and the UIC Department of Orthodontics. Participants were asked to rate referral criteria from most important to least important. Analysis of variance was used to examine the data for any differences in the importance of the criteria for each question and linear regression analysis was used to determine whether any 1 criterion was statistically meaningful within each group of practitioners. In total, 235 general dental practitioners and 357 orthodontists completed the survey, with a 100% completion rate. The most important criterion for referral to oral and maxillofacial surgeons in the general dentist group was the personal and professional relationship of the referring doctor to the specialist. In the orthodontist group, no single criterion was statistically meaningful. General dentists tend to develop long-term relationships with their patients, and when deciding the appropriate referrals it appears that personal and professional relationships that promote trust and open communication are key elements. General dentists favor these relationships when making referral decisions across a wide spectrum of procedures. Orthodontists do not place a substantial value on a specific criterion for referral and therefore may not develop the same relationships between patient and doctor and between doctors as general dentists. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Validity of two alternative systems for measuring vertical jump height.
Leard, John S; Cirillo, Melissa A; Katsnelson, Eugene; Kimiatek, Deena A; Miller, Tim W; Trebincevic, Kenan; Garbalosa, Juan C
2007-11-01
Vertical jump height is frequently used by coaches, health care professionals, and strength and conditioning professionals to objectively measure function. The purpose of this study is to determine the concurrent validity of the jump and reach method (Vertec) and the contact mat method (Just Jump) in assessing vertical jump height when compared with the criterion reference 3-camera motion analysis system. Thirty-nine college students, 25 females and 14 males between the ages of 18 and 25 (mean age 20.65 years), were instructed to perform the countermovement jump. Reflective markers were placed at the base of the individual's sacrum for the 3-camera motion analysis system to measure vertical jump height. The subject was then instructed to stand on the Just Jump mat beneath the Vertec and perform the jump. Measurements were recorded from each of the 3 systems simultaneously for each jump. The Pearson r statistic between the video and the jump and reach (Vertec) was 0.906. The Pearson r between the video and contact mat (Just Jump) was 0.967. Both correlations were significant at the 0.01 level. Analysis of variance showed a significant difference among the 3 means F(2,235) = 5.51, p < 0.05. The post hoc analysis showed a significant difference between the criterion reference (M = 0.4369 m) and the Vertec (M = 0.3937 m, p = 0.005) but not between the criterion reference and the Just Jump system (M = 0.4420 m, p = 0.972). The Just Jump method of measuring vertical jump height is a valid measure when compared with the 3-camera system. The Vertec was found to have a high correlation with the criterion reference, but the mean differed significantly. This study indicates that a higher degree of confidence is warranted when comparing Just Jump results with a 3-camera system study.
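The analysis pattern (concurrent validity via Pearson correlations against the criterion, plus a one-way ANOVA on the three means) can be sketched as follows; the jump-height numbers are invented placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical jump-height vectors (metres) from the three systems.
camera = np.array([0.42, 0.45, 0.38, 0.50, 0.44])
vertec = np.array([0.38, 0.41, 0.35, 0.46, 0.40])
just_jump = np.array([0.43, 0.45, 0.39, 0.51, 0.44])

# Concurrent validity: correlation of each field method with the criterion.
r_vertec, _ = stats.pearsonr(camera, vertec)
r_jump, _ = stats.pearsonr(camera, just_jump)

# One-way ANOVA on the three means, as in the study design.
F, p = stats.f_oneway(camera, vertec, just_jump)
print(r_vertec, r_jump, F, p)
```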
The provision of clearances accuracy in piston - cylinder mating
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.
2017-08-01
The paper is aimed at increasing the quality of pumping equipment in the oil and gas industry. The main purpose of the study is to stabilize the maximum values of productivity and durability of the pumping equipment through selective assembly of the cylinder-piston kinematic mating according to an optimization criterion. It is shown that the minimum clearance in the piston-cylinder mating is formed by the maximum-material dimensions. It is proved that the maximum-material dimensions follow their own distribution laws within the tolerance limits for the diameters of the cylinder bore and the outer cylindrical surface of the piston. Accordingly, their dispersion zones should be divided into size groups with a group tolerance equal to half the tolerance for the minimum clearance. Techniques for measuring the material dimensions - the smallest cylinder diameter and the largest piston diameter according to the envelope condition - are developed for sorting parts into size groups. Reliable control of dimensional precision ensures optimal minimum clearances of the piston-cylinder mating in all size groups of the pumping equipment, which is necessary for increasing equipment productivity and durability during production, operation and repair.
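A small sketch of the sorting step described above, assuming measured mating diameters are binned into size groups whose width is half the tolerance on the minimum clearance; the numbers and names are illustrative only:

```python
import numpy as np

def assign_size_groups(diameters, lower_limit, clearance_tolerance, n_groups):
    """Sort measured mating diameters into size groups for selective assembly.

    Group width is half the tolerance on the minimum clearance, as stated
    above; `lower_limit` is the lower bound of the dimension's dispersion zone.
    """
    group_width = clearance_tolerance / 2.0
    idx = np.floor((np.asarray(diameters) - lower_limit) / group_width).astype(int)
    return np.clip(idx, 0, n_groups - 1)

# Example: piston diameters in a 0.04 mm dispersion zone, clearance tolerance 0.02 mm
print(assign_size_groups([50.003, 50.012, 50.028, 50.037], 50.0, 0.02, 4))
```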
NASA Astrophysics Data System (ADS)
Tobias, Karen Marie
An analysis of curriculum frameworks from the fifty states to ascertain their compliance with the National Science Education Standards for integrating Science-Technology-Society (STS) themes is reported within this dissertation. Science standards for all fifty states were analyzed to determine whether the STS criteria were integrated at the elementary, middle, and high school levels of education. The analysis determined the compliance level for each state, then compared each educational level to see if the compliance was similar across the levels. Compliance is important because research shows that using STS themes in the science classroom increases students' understanding of the concepts, increases their problem-solving skills, increases their self-efficacy with respect to science, and that students instructed using STS themes score well on high-stakes science tests. The two hypotheses for this study are: (1) There is no significant difference in the degree of compliance to Science-Technology-Society themes (derived from the National Science Education Standards) between the elementary, middle, and high school levels. (2) There is no significant difference in the degree of compliance to Science-Technology-Society themes (derived from the National Science Education Standards) between the elementary, middle, and high school levels when examined individually. The Analysis of Variance F ratio was used to determine the variance between and within the three educational levels; this analysis addressed hypothesis one. The Analysis of Variance results did not lead to rejection of the null hypothesis of equal compliance to STS themes across the elementary, middle and high school educational levels. The Chi-Square test was the statistical analysis used to compare the educational levels for each individual criterion; this analysis addressed hypothesis two. The Chi-Square results showed that none of the states were equally compliant with each individual criterion across the elementary, middle, and high school levels. The National Science Education Standards were created with the input of thousands of people and over twenty scientific and educational societies. The standards were tested in numerous classrooms and showed an increase in science literacy for the students. With the No Child Left Behind legislation and Project 2061, the attainment of a science-literate society will be helped by the adoption of the NSES standards and the STS themes into American classrooms.
An optimal strategy for functional mapping of dynamic trait loci.
Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling
2010-02-01
As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. The Bayesian information criterion (BIC) is used as a model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be well used to study the genetic architecture of dynamic traits of agricultural value.
Measuring social impacts of breast carcinoma treatment in Chinese women.
Fielding, Richard; Lam, Wendy W T
2004-06-15
There is no existing instrument that is suitable for measuring the social impact of breast carcinoma (BC) and its treatment among women of Southern Chinese descent. In the current study, the authors assessed the validity of the Chinese Social Adjustment Scale, which was designed to address the need for such an instrument. Five dimensions of social concern were identified in a previous study of Cantonese-speaking Chinese women with BC; these dimensions were family and other relationships, intimacy, private self-image, and public self-image. The authors designed 40 items to address perceptions of change in these areas. These items were administered to a group of 226 women who had received treatment for BC, and factor analysis subsequently was performed to determine construct characteristics. The resulting draft instrument then was administered, along with other measures for the assessment of basic psychometric properties, to a second group of 367 women who recently had undergone surgery for BC. Factor analysis optimally identified 5 factors (corresponding to 33 items): 1) Relationships with Family (10 items, accounting for 22% of variance); 2) Self-Image (7 items, accounting for 15% of variance); 3) Relationships with Friends (7 items, accounting for 8% of variance); 4) Social Enjoyment (4 items, accounting for 6% of variance); and 5) Attractiveness and Sexuality (5 items, accounting for 5% of variance). Subscales were reliable (alpha = 0.63-0.93) and exhibited convergent validity in positive correlations with related measures and divergent validity in appropriate inverse or nonsignificant correlations with other measures. Criterion validity was good, and sensitivity was acceptable. Patterns of change on the scales were consistent with reports in the literature. Self-administration resulted in improved sensitivity. The 33-item Chinese Social Adjustment Scale validly, reliably, and sensitively measures the social impact of BC on Cantonese-speaking Hong Kong Chinese women. Further development of the scale to increase its sensitivity is underway. Copyright 2004 American Cancer Society.
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT2) during solar minimum and larger values during solar maximum periods (~17 nT2), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale appear to be log-normally distributed.
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
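A hedged sketch of fitting such a specification with statsmodels' `UnobservedComponents`; the simulated accident counts are placeholders for the real monthly series, and only the local level plus 12-month seasonal case selected in the paper is shown:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly accident counts; replace with the real 2001-2012 series.
y = np.random.default_rng(0).poisson(400, size=144).astype(float)

# Local level with a 12-month seasonal component, as selected in the paper.
model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
res = model.fit(disp=False)

print(res.aic)                      # compare AIC across candidate specifications
print(res.forecast(steps=12))       # predict the next 12 months for validation
```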
Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dilsavor, Ronald Louis
1989-01-01
An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
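A minimal sketch of the modification described above, assuming the noise power and the subtracted fraction are supplied externally; this is not the thesis code, only the diagonal-adjustment idea:

```python
import numpy as np

def modified_smi_weights(snapshots, steering, fraction, noise_power):
    """Modified SMI sketch: subtract a fraction of the noise power from the
    diagonal of the sample covariance matrix before inverting.

    `snapshots` is an (N_snapshots, N_elements) array of array outputs and
    `steering` is the desired-signal steering vector.
    """
    X = np.asarray(snapshots)
    R_hat = (X.conj().T @ X) / X.shape[0]                  # sample covariance
    R_mod = R_hat - fraction * noise_power * np.eye(R_hat.shape[0])
    return np.linalg.solve(R_mod, steering)                # w proportional to R_mod^{-1} s
```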
NASA Technical Reports Server (NTRS)
Yorchak, J. P.; Hartley, C. S.; Hinman, E.
1985-01-01
The use of aptitude tests and questionnaires to evaluate an individual's aptitude for teleoperation is studied. The Raven Progressive Matrices Test and Differential Aptitude Tests, and a 16-item questionnaire for assessing the subject's interests, academic background, and previous experience are described. The Proto-Flight Manipulator Arm, cameras, console, hand controller, and task board utilized by the 17 engineers are examined. The correlation between aptitude scores and questionnaire responses on the one hand and operator performance on the other is investigated. Multiple regression data reveal that the eight predictor variables are not individually significant for evaluating operator performance; however, the complete test battery predicts 49 percent of the subject variance on the criterion task.
Development of unauthorized airborne emission source identification procedure
NASA Astrophysics Data System (ADS)
Shtripling, L. O.; Bazhenov, V. V.; Varakina, N. S.; Kupriyanova, N. P.
2018-01-01
The paper presents a procedure for searching for sources of unauthorized airborne emissions. To support sound regulatory decisions on airborne pollutant emissions and to ensure the environmental safety of the population, the procedure provides for determining the pollutant mass emission of the source responsible for a high pollution level and for searching for a previously unrecognized contamination source in a specified area. To determine the true value of the mass emission from the source, the root-mean-square mismatch between the computed and measured pollutant concentrations at the given locations is minimized.
A new Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the local parameters of the interplanetary current sheet. The method, called `HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, although in many cases the two methods behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than MVM.
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root-mean-square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
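The reproducibility metric quoted above can be computed as a root-mean-square coefficient of variation; a small sketch, assuming the common definition (sample standard deviation over the mean, pooled in quadrature and expressed in percent):

```python
import numpy as np

def rms_cv_percent(repeat_measurements):
    """Root-mean-square coefficient of variation (%) over a set of joints,
    each re-measured several times (e.g., at several flexion angles)."""
    cvs = [np.std(m, ddof=1) / np.mean(m) for m in repeat_measurements]
    return 100.0 * np.sqrt(np.mean(np.square(cvs)))

# e.g., mean joint space width (mm) of two joints, each measured at three angles
print(rms_cv_percent([[1.71, 1.69, 1.74], [1.55, 1.58, 1.53]]))
```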
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained by two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically-induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that both methods for obtaining natural head position were comparable, but reproducibility was highest with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and the minimum variance was seen with the fluid level device method, as shown by precision and Pearson correlation. In conclusion, the two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance, which reduced the number of parameters from eighty-three to sixteen. The sixteen parameters are then further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. The results show that high values of the efficiency criteria did not guarantee good reproduction of the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well, but the minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated; nevertheless, a number of samples still reproduced these three signatures well. Analysis of peak flows shows that small and medium floods are simulated well while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
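A hedged sketch of the second, variance-based step using the SALib package; the three-parameter toy problem and the stand-in model function are placeholders for the sixteen DHSVM parameters and the actual model runs:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical 3-parameter problem standing in for the 16 DHSVM parameters.
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-2], [0.3, 0.6], [0.1, 0.4]],
}

X = saltelli.sample(problem, 256)            # Saltelli sampling design

def model(x):                                # stand-in for one DHSVM run
    return x[0] * 1e3 + x[1] ** 2 + 0.1 * x[2]

Y = np.array([model(x) for x in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                    # first-order and total-order indices
```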
Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.
2014-01-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180
Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E
2014-04-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
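As a rough illustration of the two covariate families being compared, the sketch below builds a fourth-order Legendre polynomial basis and a linear-spline ("hat" function) basis over days in milk. This is a minimal Python/NumPy sketch under assumed knot positions and the 5-365 DIM range quoted above; it is not the authors' random regression model, which also includes herd-test-date effects and genetic and permanent environmental regressions.

```python
import numpy as np

def legendre_basis(dim, order=4, dim_min=5, dim_max=365):
    """Legendre polynomial covariates of order 0..`order` on standardized DIM."""
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0  # map DIM to [-1, 1]
    # legval with a unit coefficient vector evaluates each polynomial separately
    return np.column_stack([np.polynomial.legendre.legval(x, np.eye(order + 1)[k])
                            for k in range(order + 1)])

def linear_spline_basis(dim, knots):
    """Piecewise-linear ('hat') covariates, one column per knot, via interpolation."""
    knots = np.asarray(knots, dtype=float)
    return np.column_stack([np.interp(dim, knots, np.eye(len(knots))[k])
                            for k in range(len(knots))])

dim = np.arange(5, 366)
L = legendre_basis(dim)                                        # 5 columns: Legendre order 4
S6 = linear_spline_basis(dim, [5, 50, 120, 200, 280, 365])     # assumed positions for 6 knots
print(L.shape, S6.shape)
```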
NASA Astrophysics Data System (ADS)
Phillips, Nicholas G.; Hu, B. L.
2000-10-01
We present calculations of the variance of fluctuations and of the mean of the energy momentum tensor of a massless scalar field for the Minkowski and Casimir vacua as a function of an intrinsic scale defined by a smeared field or by point separation. We point out that, contrary to prior claims, the ratio of variance to mean-squared being of the order unity is not necessarily a good criterion for measuring the invalidity of semiclassical gravity. For the Casimir topology we obtain expressions for the variance to mean-squared ratio as a function of the intrinsic scale (defined by a smeared field) compared to the extrinsic scale (defined by the separation of the plates, or the periodicity of space). Our results make it possible to identify the spatial extent where negative energy density prevails which could be useful for studying quantum field effects in worm holes and baby universes, and for examining the design feasibility of real-life "time machines." For the Minkowski vacuum we find that the ratio of the variance to the mean-squared, calculated from the coincidence limit, is identical to the value of the Casimir case at the same limit for spatial point separation while identical to the value of a hot flat space result with a temporal point separation. We analyze the origin of divergences in the fluctuations of the energy density and discuss choices in formulating a procedure for their removal, thus raising new questions about the uniqueness and even the very meaning of regularization of the energy momentum tensor for quantum fields in curved or even flat spacetimes when spacetime is viewed as having an extended structure.
Murphy, Alistair P; Duffield, Rob; Kellett, Aaron; Reid, Machar
2014-09-01
To investigate the discrepancy between coach and athlete perceptions of internal load and notational analysis of external load in elite junior tennis. Fourteen elite junior tennis players and 6 international coaches were recruited. Ratings of perceived exertion (RPEs) were recorded for individual drills and whole sessions, along with a rating of mental exertion, coach rating of intended session exertion, and athlete heart rate (HR). Furthermore, total stroke count and unforced-error count were notated using video coding after each session, alongside coach and athlete estimations of shots and errors made. Finally, regression analyses explained the variance in the criterion variables of athlete and coach RPE. Repeated-measures analyses of variance and interclass correlation coefficients revealed that coaches significantly (P < .01) underestimated athlete session RPE, with only moderate correlation (r = .59) demonstrated between coach and athlete. However, athlete drill RPE (P = .14; r = .71) and mental exertion (P = .44; r = .68) were comparable and substantially correlated. No significant differences in estimated stroke count were evident between athlete and coach (P = .21), athlete notational analysis (P = .06), or coach notational analysis (P = .49). Coaches estimated significantly greater unforced errors than either athletes or notational analysis (P < .01). Regression analyses found that 54.5% of variance in coach RPE was explained by intended session exertion and coach drill RPE, while drill RPE and peak HR explained 45.3% of the variance in athlete session RPE. Coaches misinterpreted session RPE but not drill RPE, while inaccurately monitoring error counts. Improved understanding of external- and internal-load monitoring may help coach-athlete relationships in individual sports like tennis avoid maladaptive training.
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of implementing the proposed algorithm are compared with several well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
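For context on the practical application mentioned above, the sketch below computes standard MVDR beamforming weights, w = R^-1 a / (a^H R^-1 a), for a toy uniform linear array. The array geometry, angles and noise levels are illustrative assumptions, and the ECGSA refinement of these weights is not reproduced here.

```python
import numpy as np

def mvdr_weights(R, steering):
    """Minimum variance distortionless response weights: w = R^-1 a / (a^H R^-1 a)."""
    Rinv_a = np.linalg.solve(R, steering)
    return Rinv_a / (steering.conj() @ Rinv_a)

def ula_steering(n_elements, theta_rad, spacing=0.5):
    """Steering vector of a uniform linear array (element spacing in wavelengths)."""
    n = np.arange(n_elements)
    return np.exp(-2j * np.pi * spacing * n * np.sin(theta_rad))

# Toy example: 8-element array, desired signal at 0 deg, interferer at 40 deg
rng = np.random.default_rng(0)
a_sig = ula_steering(8, 0.0)
a_int = ula_steering(8, np.deg2rad(40))
snapshots = (a_int[:, None] * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
             + 0.1 * (rng.standard_normal((8, 500)) + 1j * rng.standard_normal((8, 500))))
R = snapshots @ snapshots.conj().T / 500            # sample covariance of interference + noise
w = mvdr_weights(R, a_sig)
print(abs(w.conj() @ a_sig), abs(w.conj() @ a_int))  # ~1 toward the signal, near 0 toward the interferer
```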
A New Look at Some Solar Wind Turbulence Puzzles
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2006-01-01
Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.
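As a reminder of how a minimum variance direction is obtained from fluctuation data, the following sketch applies classical minimum variance analysis to synthetic vector fluctuations: the direction is the eigenvector of the sample covariance matrix with the smallest eigenvalue. The synthetic data are purely illustrative and are not solar wind observations.

```python
import numpy as np

def minimum_variance_direction(b_samples):
    """Classical minimum variance analysis: the eigenvector of the field covariance
    matrix with the smallest eigenvalue gives the minimum variance direction."""
    cov = np.cov(np.asarray(b_samples, dtype=float), rowvar=False)  # 3x3 covariance of B components
    eigvals, eigvecs = np.linalg.eigh(cov)                          # eigh returns ascending eigenvalues
    return eigvecs[:, 0], eigvals                                   # min-variance eigenvector, all variances

# Toy fluctuations: most power in x-y, little along z
rng = np.random.default_rng(1)
b = rng.standard_normal((1000, 3)) * np.array([2.0, 1.0, 0.1])
n_min, variances = minimum_variance_direction(b)
print(n_min, variances)
```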
Minimum depth of soil cover above long-span soil-steel railway bridges
NASA Astrophysics Data System (ADS)
Esmaeili, Morteza; Zakeri, Jabbar Ali; Abdulrazagh, Parisa Haji
2013-12-01
Recently, soil-steel bridges have become more commonly used as railway-highway crossings because of their economical advantages and short construction period compared with traditional bridges. The currently developed formula for determining the minimum depth of covers by existing codes is typically based on vehicle loads and non-stiffened panels and takes into consideration the geometrical shape of the metal structure to avoid the failure of soil cover above a soil-steel bridge. The effects of spans larger than 8 m or more stiffened panels due to railway loads that maintain a safe railway track have not been accounted for in the minimum cover formulas and are the subject of this paper. For this study, two-dimensional finite element (FE) analyses of four low-profile arches and four box culverts with spans larger than 8 m were performed to develop new patterns for the minimum depth of soil cover by considering the serviceability criterion of the railway track. Using the least-squares method, new formulas were then developed for low-profile arches and box culverts and were compared with Canadian Highway Bridge Design Code formulas. Finally, a series of three-dimensional (3D) FE analyses were carried out to control the out-of-plane buckling in the steel plates due to the 3D pattern of train loads. The results show that the out-of-plane bending does not control the buckling behavior of the steel plates, so the proposed equations for minimum depth of cover can be appropriately used for practical purposes.
NASA Astrophysics Data System (ADS)
Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun
2017-06-01
The representative elementary volume (REV) is important for determining properties of porous media and those involved in the migration of contaminants, especially dense nonaqueous phase liquids (DNAPLs), in the subsurface environment. In this study, an experiment on the long-term migration of the commonly used DNAPL perchloroethylene (PCE) is performed in a two-dimensional (2D) sandbox, where several system variables including porosity, PCE saturation (S_oil) and PCE-water interfacial area (A_OW) are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a criterion of relative gradient error (ε_g^i), and results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both the minimum S_oil-REV and minimum A_OW-REV sizes change their shapes from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of the minimum S_oil-REV size reveals a positive linear trend, while the frequency of the minimum A_OW-REV size tends to a Gaussian distribution in the range of 2.0 mm to 7.0 mm and shows a peak value in the 13.0 mm to 14.0 mm range. Undoubtedly, this study will facilitate the quantification of REVs for material and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
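A minimal sketch of the window-growing idea behind an REV estimate is given below: it grows an averaging window around a point in a synthetic porosity field and accepts the first size at which a simple discrete relative gradient error falls below a threshold. The exact ε_g^i criterion, thresholds and light-transmission-derived fields used in the study are not reproduced, so all numbers here are assumptions.

```python
import numpy as np

def rev_size(field, center, sizes, threshold=0.05):
    """Grow a square averaging window around `center` and return the first window
    size at which a discrete relative gradient error drops below `threshold`."""
    ci, cj = center
    means = []
    for s in sizes:
        h = s // 2
        win = field[max(ci - h, 0):ci + h + 1, max(cj - h, 0):cj + h + 1]
        means.append(win.mean())
    for k in range(1, len(sizes)):
        rel_grad_err = abs(means[k] - means[k - 1]) / (abs(means[k]) + 1e-12) / (sizes[k] - sizes[k - 1])
        if rel_grad_err < threshold:
            return sizes[k]
    return None  # no REV found within the tested range

# Toy porosity field on a 1-mm grid (illustrative, not the sandbox data)
rng = np.random.default_rng(2)
porosity = 0.35 + 0.05 * rng.standard_normal((200, 200))
print(rev_size(porosity, center=(100, 100), sizes=list(range(3, 41, 2))))
```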
Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.
Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand
2017-12-15
The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for the workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized in three levels were selected to describe the exposure relationships separately for the job types production, cleaning, maintenance, foreman, administration, laboratory, and other jobs by linear mixed models. The influence of plant and job determinants on variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models (best) describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased the thoracic aerosol exposure above the mean level for production workers were: packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent with welding and dismantling before repair work increased the exposure while time with electrical maintenance and oiling decreased the exposure. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold (cleaners) variation in thoracic aerosol levels. Within-worker (WW) variance contained a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (from 5% to 39% reduction for production, maintenance, and other jobs respectively, but a 79% increase for foremen) and a substantial influence on the between-worker within-plant variance (30-96% for production, foremen, and other workers). Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction and 8% increase), and considerable influence on the BP variance (36-58% reduction) compared to the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers. Thus, job task may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reduce the exposure levels. Rotation between low and higher exposed tasks may help equalize the exposure levels between high and low exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
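Model selection in the study relied on the lowest AIC over all combinations of determinants. The fragment below only illustrates that bookkeeping step, AIC = 2k - 2 lnL, for a few hypothetical fitted mixed models; the log-likelihoods and parameter counts are placeholders, not values from the cement-plant data.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion (Akaike, 1974): AIC = 2k - 2*lnL."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted mixed models for one job type (placeholder values, not study results)
candidates = {
    "null (no determinants)":   {"llf": -812.4, "k": 4},
    "plant determinants":       {"llf": -801.7, "k": 7},
    "job determinants":         {"llf": -795.2, "k": 10},
    "plant & job determinants": {"llf": -793.9, "k": 13},
}
scores = {name: aic(m["llf"], m["k"]) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> best model:", best)
```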
An expert system for planning and scheduling in a telerobotic environment
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.
1991-01-01
A knowledge based approach to assigning tasks to multi-agents working cooperatively in jobs that require a telerobot in the loop was developed. The generality of the approach allows for such a concept to be applied in a nonteleoperational domain. The planning architecture known as the task oriented planner (TOP) uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open ended architecture developed with a NEXPERT expert system shell and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and nonstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique is not to optimize a given scheduling criterion as in classical job and/or flow shop problems. For a teleoperation job schedule, criteria are situation dependent. A criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.
Damage Propagation Modeling for Aircraft Engine Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil
2008-01-01
This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermo-dynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant and the failure criterion is reached when health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
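A minimal sketch of the degradation bookkeeping described above: an exponentially worsening fault is imposed after a random set point, several operational margins are eroded by it, and the health index is taken as the minimum margin, with failure declared when it reaches zero. The margin definitions and rate constants are illustrative assumptions, not the engine simulation used to generate the PHM 08 data.

```python
import numpy as np

def health_index(margins):
    """Health index = minimum of the superimposed operational margins at each cycle."""
    return np.min(margins, axis=1)

# Exponentially worsening flow/efficiency fault after a random initial-deterioration set point
cycles = np.arange(0, 300)
rng = np.random.default_rng(3)
t0 = rng.integers(20, 80)
fault = np.where(cycles < t0, 0.0, np.exp(0.02 * (cycles - t0)) - 1.0)
margins = np.column_stack([
    1.0 - 0.010 * fault,   # illustrative operational margins eroded by the fault
    1.2 - 0.012 * fault,
    0.9 - 0.008 * fault,
])
hi = health_index(margins)
failure_cycle = cycles[np.argmax(hi <= 0)] if np.any(hi <= 0) else None
print("failure criterion reached at cycle:", failure_cycle)
```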
A parsimonious tree-grow method for haplotype inference.
Li, Zhenping; Zhou, Wenfeng; Zhang, Xiang-Sun; Chen, Luonan
2005-09-01
Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, such as disease gene mapping and drug design. Parsimony haplotyping is one of the haplotyping problems belonging to the NP-hard class. In this paper, we aim to develop a novel algorithm for the haplotype inference problem with the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is also proposed to improve the computational efficiency. We show that the proposed approach is not only effective with a high accuracy, but also very efficient, with computational complexity in the order of O(m^2 n) time for n single nucleotide polymorphism sites in m individual genotypes. The software is available upon request from the authors (chen@elec.osaka-sandai.ac.jp) or from http://zhangroup.aporc.org/bioinfo/ptg/. Supporting material is available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf
Oo, W M; Linklater, J M; Daniel, M; Saarakkala, S; Samuels, J; Conaghan, P G; Keen, H I; Deveza, L A; Hunter, D J
2018-05-01
The aims of this study were to systematically review clinimetrics of commonly assessed ultrasound pathologies in knee, hip and hand osteoarthritis (OA), and to conduct a meta-analysis for each clinimetric. Medline, Embase, and Cochrane Library databases were searched from their inceptions to September 2016. According to the Outcome Measures in Rheumatology (OMERACT) Instrument Selection Algorithm, data extraction focused on ultrasound technical features and performance metrics. Methodological quality was assessed with modified 19-item Downs and Black score and 11-item Quality Appraisal of Diagnostic Reliability (QAREL) score. Separate meta-analyses were performed for clinimetrics: (1) inter-rater/intra-rater reliability; (2) construct validity; (3) criteria validity; and (4) internal/external responsiveness. Statistical Package for the Social Sciences (SPSS), Excel and Comprehensive Meta-analysis were used. Our search identified 1126 records; of these, 100 were eligible, including a total of 8542 patients and 32,373 joints. The average Downs and Black score was 13.01, and average QAREL was 5.93. The stratified meta-analysis was performed only for knee OA, which demonstrated moderate to substantial reliability [minimum kappa > 0.44(0.15,0.74), minimum intraclass correlation coefficient (ICC) > 0.82(0.73-0.89)], weak construct validity against pain (r = 0.12 to 0.27), function (r = 0.15 to 0.23), and blood biomarkers (r = 0.01 to 0.21), but weak to strong correlation with plain radiography (r = 0.13 to 0.60), strong association with Magnetic Resonance Imaging (MRI) [minimum r = 0.60(0.52,0.67)] and strong discrimination against symptomatic patients (OR = 3.08 to 7.46). There was strong criterion validity against cartilage histology [r = 0.66(-0.05,0.93)], and small to moderate internal [standardized mean difference(SMD) = 0.20 to 0.58] and external (r = 0.35 to 0.43) responsiveness to interventions. Ultrasound demonstrated strong criterion validity with cartilage histology, poor to strong correlation with patient findings and MRI, moderate reliability, and low responsiveness to interventions. CRD42016039954. Copyright © 2018 Osteoarthritis Research Society International. All rights reserved.
Progress Report on Alloy 617 Time Dependent Allowables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Julie Knibloe
2015-06-01
Time dependent allowable stresses are required in the ASME Boiler and Pressure Vessel Code for design of components in the temperature range where time dependent deformation (i.e., creep) is expected to become significant. There are time dependent allowable stresses in Section IID of the Code for use in the non-nuclear construction codes; however, there are additional criteria that must be considered in developing time dependent allowables for nuclear components. These criteria are specified in Section III NH. St is defined as the lesser of three quantities: 100% of the average stress required to obtain a total (elastic, plastic, primary and secondary creep) strain of 1%; 67% of the minimum stress to cause rupture; and 80% of the minimum stress to cause the initiation of tertiary creep. The values are reported for a range of temperatures and for time increments up to 100,000 hours. These values are determined from uniaxial creep tests, which involve the elevated temperature application of a constant load which is relatively small, resulting in deformation over a long time period prior to rupture. The stress which is the minimum resulting from these criteria is the time dependent allowable stress St. In this report, data from a large number of creep and creep-rupture tests on Alloy 617 are analyzed using the ASME Section III NH criteria. Data used in the analysis are from the ongoing DOE sponsored high temperature materials program, from the Korea Atomic Energy Institute through the Generation IV VHTR Materials Program, and from historical data from previous HTR research and vendor data generated in developing the alloy. It is found that the tertiary creep criterion determines St at the highest temperatures, while the stress to cause 1% total strain controls at low temperatures. The ASME Section III Working Group on Allowable Stress Criteria has recommended that the uncertainties associated with determining the onset of tertiary creep and the lack of significant cavitation associated with early tertiary creep strain suggest that the tertiary creep criterion is not appropriate for this material. If the tertiary creep criterion is dropped from consideration, the stress-to-rupture criterion determines St at all but the lowest temperatures.
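The selection of St reduces to taking the minimum over the three Section III NH quantities quoted above. The toy function below does exactly that, with an option to drop the tertiary creep criterion as recommended by the Working Group; the stress values are illustrative placeholders, not Alloy 617 data.

```python
def time_dependent_allowable(stress_1pct_strain_avg, rupture_stress_min,
                             tertiary_onset_stress_min, include_tertiary=True):
    """St per the Section III NH criteria quoted above: the lesser of 100% of the
    average stress to 1% total strain, 67% of the minimum stress to rupture, and
    80% of the minimum stress to the onset of tertiary creep."""
    candidates = [1.00 * stress_1pct_strain_avg, 0.67 * rupture_stress_min]
    if include_tertiary:
        candidates.append(0.80 * tertiary_onset_stress_min)
    return min(candidates)

# Illustrative (non-tabulated) values in MPa for one temperature/time increment
print(time_dependent_allowable(30.0, 36.0, 28.0))                          # tertiary criterion governs
print(time_dependent_allowable(30.0, 36.0, 28.0, include_tertiary=False))  # rupture criterion governs
```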
Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland
Pluhowski, E.J.
1981-01-01
Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
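A harmonic description of seasonal stream temperature can be sketched as a least-squares fit of a mean plus annual sine and cosine terms, with the explained-variance fraction playing the role of the percentages quoted above. The code below fits synthetic daily maxima; it illustrates the functional form only and is not the Muddy Creek data or the exact harmonic equations of the report.

```python
import numpy as np

def fit_annual_harmonic(day_of_year, temperature):
    """Least-squares fit of T(d) = mean + a*sin(2*pi*d/365) + b*cos(2*pi*d/365),
    returning the mean, amplitude, and fraction of variance explained."""
    d = np.asarray(day_of_year, dtype=float)
    t = np.asarray(temperature, dtype=float)
    X = np.column_stack([np.ones_like(d),
                         np.sin(2 * np.pi * d / 365.0),
                         np.cos(2 * np.pi * d / 365.0)])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    fitted = X @ coef
    r2 = 1.0 - np.var(t - fitted) / np.var(t)
    return {"mean": coef[0], "amplitude": float(np.hypot(coef[1], coef[2])), "r2": r2}

# Synthetic daily maxima with noise (illustrative, not Muddy Creek observations)
rng = np.random.default_rng(4)
days = np.arange(1, 366)
temps = 12.5 + 9.0 * np.sin(2 * np.pi * (days - 100) / 365.0) + 2.3 * rng.standard_normal(days.size)
print(fit_annual_harmonic(days, temps))
```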
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
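For a Gaussian noise model with a parameter-independent covariance, the Cramer-Rao bounds follow from the Fisher information matrix F = J^T C^-1 J, where J is the Jacobian of the reflectance model with respect to the parameters. The sketch below computes the bounds for a toy two-parameter, four-band case; the Jacobian and noise covariance are invented placeholders, not the bio-optical or sensor noise models of the study.

```python
import numpy as np

def gaussian_crb(jacobian, noise_cov):
    """Cramer-Rao bounds for unbiased estimators under additive Gaussian noise:
    F = J^T C^-1 J, CRB_i = [F^-1]_ii, with J the model Jacobian."""
    J = np.asarray(jacobian, dtype=float)
    F = J.T @ np.linalg.solve(noise_cov, J)   # Fisher information matrix
    return np.diag(np.linalg.inv(F))          # minimum attainable estimation variances

# Toy example: 4 spectral bands, 2 parameters, uncorrelated band noise (illustrative numbers)
J = np.array([[0.8, 0.1],
              [0.5, 0.3],
              [0.2, 0.7],
              [0.1, 0.9]])
C = np.diag([1e-4, 1e-4, 2e-4, 2e-4])
print(gaussian_crb(J, C))
```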
Commentary: legal minimum tread depth for passenger car tires in the U.S.A.--a survey.
Blythe, William; Seguin, Debra E
2006-06-01
Available tire traction is a significant highway safety issue, particularly on wet roads. Tire-roadway friction on dry, clean roads is essentially independent of tread depth, and depends primarily on roadway surface texture. However, tire-wet-roadway friction, both for longitudinal braking and lateral cornering forces, depends on several variables, most importantly on water depth, speed and tire tread depth, and the roadway surface texture. The car owner-operator has control over speed and tire condition, but not over water depth or road surface texture. Minimum tire tread depth is legislated throughout most of the United States and Europe. Speed reduction for wet road conditions is not. A survey of state requirements for legal minimum tread depth for passenger vehicle tires in the United States is presented. Most states require a minimum of 2/32 of an inch (approximately 1.6 mm) of tread, but two require less, some have no requirements, and some defer to the federal criterion for commercial vehicle safety inspections. The requirement of 2/32 of an inch is consistent with the height of the tread-wear bars built into passenger car tires sold in the United States, but the rationale for that requirement, or other existing requirements, is not clear. Recent research indicates that a minimum tread depth of 2/32 of an inch does not prevent significant loss of friction at highway speeds, even for minimally wet roadways. The research suggests that tires with less than 4/32 of an inch tread depth may lose approximately 50 percent of available friction in those circumstances, even before hydroplaning occurs. It is concluded that the present requirements for minimum passenger car tire tread depth are not based upon rational safety considerations, and that an increase in the minimum tread depth requirements would have a beneficial effect on highway safety.
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-splines functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-splines functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-splines functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from different models of analyses were compared using the REML form of the Akaike Information criterion and Schwarz' Bayesian Information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for direct additive genetic effect and animal permanent environmental effect and two knots for maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but higher number of parameters need to be estimated with B-splines functions. © 2010 Blackwell Verlag GmbH.
A new instrument to measure quality of life of heart failure family caregivers.
Nauser, Julie A; Bakas, Tamilyn; Welch, Janet L
2011-01-01
Family caregivers of heart failure (HF) patients experience poor physical and mental health leading to poor quality of life. Although several quality-of-life measures exist, they are often too generic to capture the unique experience of this population. The purpose of this study was to evaluate the psychometric properties of the Family Caregiver Quality of Life (FAMQOL) Scale that was designed to assess the physical, psychological, social, and spiritual dimensions of quality of life among caregivers of HF patients. Psychometric testing of the FAMQOL with 100 HF family caregivers was conducted using item analysis, Cronbach α, intraclass correlation, factor analysis, and hierarchical multiple regression guided by a conceptual model. Caregivers were predominately female (89%), white (73%), and spouses (62%). Evidence of internal consistency reliability (α=.89) was provided for the FAMQOL, with item-total correlations of 0.39 to 0.74. Two-week test-retest reliability was supported by an intraclass correlation coefficient of 0.91. Using a 1-factor solution and principal axis factoring, loadings ranged from 0.31 to 0.78, with 41% of the variance explained by the first factor (eigenvalue=6.5). With hierarchical multiple regression, 56% of the FAMQOL variance was explained by model constructs (F(8,91)=16.56, P<.001). Criterion-related validity was supported by correlations with SF-36 General (r=0.45, P<.001) and Mental (r=0.59, P<.001) Health subscales and the Bakas Caregiving Outcomes Scale (r=0.73, P<.001). Evidence of internal and test-retest reliability and construct and criterion validity was provided for physical, psychological, and social well-being subscales. The 16-item FAMQOL is a brief, easy-to-administer instrument that has evidence of reliability and validity in HF family caregivers. Physical, psychological, and social well-being can be measured with 4-item subscales. The FAMQOL scale could serve as a valuable measure in research, as well as an assessment tool to identify caregivers in need of intervention.
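One of the psychometric quantities reported above, Cronbach's α, can be computed directly from a subjects-by-items score matrix as α = k/(k-1) · (1 - Σ item variances / total-score variance). The sketch below applies this to simulated 16-item data; the simulated scores are illustrative and unrelated to the FAMQOL sample.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an n_subjects x n_items score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy 16-item scale with a common factor plus item noise (illustrative only)
rng = np.random.default_rng(5)
common = rng.standard_normal((100, 1))
scores = 3 + common + 0.8 * rng.standard_normal((100, 16))
print(round(cronbach_alpha(scores), 2))
```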
NASA Astrophysics Data System (ADS)
Chen, C.-H.; Tan, T. Y.
1995-10-01
Using the theoretically calculated point-defect total-energy values of Baraff and Schlüter in GaAs, an amphoteric-defect model has been proposed by Walukiewicz to explain a large number of experimental results. The suggested amphoteric-defect system consists of two point-defect species capable of transforming into each other: the doubly negatively charged Ga vacancy V_Ga^2- and the triply positively charged defect complex (As_Ga + V_As)^3+, with As_Ga being the antisite defect of an As atom occupying a Ga site and V_As being an As vacancy. When present in sufficiently high concentrations, the amphoteric defect system V_Ga^2-/(As_Ga + V_As)^3+ is supposed to be able to pin the GaAs Fermi level at approximately the E_v + 0.6 eV level position, which requires that the net free energy of the V_Ga/(As_Ga + V_As) defect system be at a minimum at the same Fermi-level position. We have carried out a quantitative study of the net energy of this defect system in accordance with the individual point-defect total-energy results of Baraff and Schlüter, and found that the minimum net defect-system-energy position is located at about the E_v + 1.2 eV level position instead of the needed E_v + 0.6 eV position. Therefore, the validity of the amphoteric-defect model is in doubt. We have proposed a simple criterion for determining the Fermi-level pinning position in the deeper part of the GaAs band gap due to two oppositely charged point-defect species, which should be useful in the future.
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
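As a simplified stand-in for the sensitivity/uncertainty screening described above, the sketch below ranks Monte Carlo inputs by their squared correlation with the model outcome and keeps the smallest set covering most of that (normalized) contribution. This correlation-based screening and the toy five-input model are assumptions for illustration, not the authors' method or multimedia model.

```python
import numpy as np

def rank_influential_inputs(samples, output, coverage=0.90):
    """Rank stochastic inputs by squared correlation with the outcome and return
    the smallest set whose summed (normalized) contribution reaches `coverage`."""
    contrib = np.array([np.corrcoef(samples[:, j], output)[0, 1] ** 2
                        for j in range(samples.shape[1])])
    contrib = contrib / contrib.sum()
    order = np.argsort(contrib)[::-1]
    cum = np.cumsum(contrib[order])
    n_keep = int(np.searchsorted(cum, coverage)) + 1
    return order[:n_keep], contrib

# Toy multimedia-style model: outcome dominated by two of five inputs
rng = np.random.default_rng(6)
X = rng.lognormal(size=(5000, 5))
y = 3.0 * X[:, 0] + 1.5 * X[:, 3] + 0.05 * X[:, 1] + 0.01 * X[:, 2] + 0.01 * X[:, 4]
keep, contrib = rank_influential_inputs(X, y)
print("influential inputs:", keep, "contributions:", np.round(contrib, 3))
```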
Tank Investigation of a Powered Dynamic Model of a Large Long-Range Flying Boat
NASA Technical Reports Server (NTRS)
Parkinson, John B; Olson, Roland E; Harr, Marvin I
1947-01-01
Principles for designing the optimum hull for a large long-range flying boat to meet the requirements of seaworthiness, minimum drag, and ability to take off and land at all operational gross loads were incorporated in a 1/12-size powered dynamic model of a four-engine transport flying boat having a design gross load of 165,000 pounds. These design principles included the selection of a moderate beam loading, ample forebody length, sufficient depth of step, and close adherence to the form of a streamline body. The aerodynamic and hydrodynamic characteristics of the model were investigated in Langley tank no. 1. Tests were made to determine the minimum allowable depth of step for adequate landing stability, the suitability of the fore-and-aft location of the step, the take-off performance, the spray characteristics, and the effects of simple spray-control devices. The application of the design criterions used and test results should be useful in the preliminary design of similar large flying boats.
On the optimization of discrete structures with aeroelastic constraints
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Ashley, H.
1978-01-01
The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.
Use of the Collaborative Optimization Architecture for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Moore, A. A.; Kroo, I. M.
1996-01-01
Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
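A rough sketch of the training-phase ideas is given below: a greedy forward search stands in for finding the 'minimum-sufficient ensemble', and a backward stepwise pass prunes it into a collection of ensembles with descending complexity. The base classifiers are simulated binary predictors, and the greedy search, majority-vote fitness, and data are all assumptions; the EAA aggregation stage is not reproduced.

```python
import numpy as np

def ensemble_accuracy(pred_matrix, labels, members):
    """In-sample accuracy of the majority vote of the selected member classifiers."""
    votes = pred_matrix[:, members].mean(axis=1) >= 0.5   # binary 0/1 base predictions
    return np.mean(votes.astype(int) == labels)

def minimum_sufficient_ensemble(pred_matrix, labels):
    """Greedy forward search: add the base classifier that most improves the
    in-sample fit; stop when no addition helps (a stand-in for the exact search)."""
    remaining = list(range(pred_matrix.shape[1]))
    members, best = [], -1.0
    while remaining:
        score, j = max((ensemble_accuracy(pred_matrix, labels, members + [j]), j)
                       for j in remaining)
        if score <= best:
            break
        members.append(j)
        remaining.remove(j)
        best = score
    return members

def backward_collection(pred_matrix, labels, members):
    """Backward stepwise pruning from the minimum-sufficient ensemble, yielding a
    collection of ensembles with descending complexity."""
    collection = [list(members)]
    current = list(members)
    while len(current) > 1:
        drop = max(current, key=lambda j: ensemble_accuracy(
            pred_matrix, labels, [m for m in current if m != j]))
        current = [m for m in current if m != drop]
        collection.append(list(current))
    return collection

# Toy pool of 10 binary base classifiers on 200 training patterns (illustrative)
rng = np.random.default_rng(7)
y = rng.integers(0, 2, 200)
preds = np.column_stack([np.where(rng.random(200) < 0.7, y, 1 - y) for _ in range(10)])
mse = minimum_sufficient_ensemble(preds, y)
print("minimum-sufficient ensemble:", mse)
print("collection sizes:", [len(e) for e in backward_collection(preds, y, mse)])
```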
Response to selection while maximizing genetic variance in small populations.
Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E
2016-09-20
Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the maximum genetic variance in a unique population could be a valuable option. The aim of this work was to study the effect of the maximization of genetic variance to increase selection response and improve the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), a MVT scenario with a restriction on increases in average inbreeding (D), a MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are unpractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in Scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is then a valuable alternative, in particular for a long-term response to selection.
NASA Astrophysics Data System (ADS)
Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.
2016-05-01
The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
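The interannual (2-7 yr) variance estimate at the heart of this record can be illustrated with a simple periodogram: sum the spectral power between 1/7 and 1/2 cycles per year and normalize by the total. The sketch below does this for an evenly sampled synthetic δ18O series; real stalagmite series are unevenly sampled, and the study's spectral method is not reproduced here.

```python
import numpy as np

def interannual_variance_fraction(series, dt_years, period_band=(2.0, 7.0)):
    """Fraction of variance in the ENSO (2-7 yr) period band, from a simple
    periodogram of an evenly sampled, mean-removed d18O series."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=dt_years)            # cycles per year
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 1.0 / period_band[1]) & (freqs <= 1.0 / period_band[0])
    return power[band].sum() / power[1:].sum()             # exclude the zero frequency

# Toy sub-annual series (4 samples/yr for 200 yr) with an ENSO-like 4-yr component
rng = np.random.default_rng(8)
t = np.arange(0, 200, 0.25)
d18o = -9.0 + 0.3 * np.sin(2 * np.pi * t / 4.0) + 0.2 * rng.standard_normal(t.size)
print(round(interannual_variance_fraction(d18o, dt_years=0.25), 2))
```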
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in fifteen different regions of Nitraria roborowskii samples were determined by inductively coupled plasma-atomic emission spectrometry (ICP-OES), and their elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were determined in N. roborowskii, of which V could not be detected. In addition, Na, K and Ca were present at high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were obtained from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method was simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. All the results will provide a good basis for comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
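The principal component analysis step can be sketched as an eigendecomposition of the correlation matrix of standardized element concentrations, with each eigenvalue's share of the trace giving the variance contribution rates quoted above. The data below are a random toy matrix, not the measured N. roborowskii contents.

```python
import numpy as np

def pca_variance_explained(data):
    """PCA on standardized concentrations: component loadings and each component's
    share of the total variance (the 'variance contribution rates')."""
    X = np.asarray(data, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    corr = np.cov(Z, rowvar=False)                 # correlation matrix of the elements
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    explained = eigvals[order] / eigvals.sum()
    return eigvecs[:, order], explained

# Toy matrix: 15 samples x 6 elements driven by two latent "origin" factors (illustrative)
rng = np.random.default_rng(9)
geo = rng.standard_normal((15, 2))
elements = geo @ rng.standard_normal((2, 6)) + 0.3 * rng.standard_normal((15, 6))
loadings, explained = pca_variance_explained(elements)
print(np.round(np.cumsum(explained), 3))           # cumulative variance contribution rates
```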
Sleep reactivity and insomnia: genetic and environmental influences.
Drake, Christopher L; Friedman, Naomi P; Wright, Kenneth P; Roth, Thomas
2011-09-01
Determine the genetic and environmental contributions to sleep reactivity and insomnia. Population-based twin cohort. 1782 individual twins (988 monozygotic or MZ; 1,086 dizygotic or DZ), including 744 complete twin pairs (377 MZ and 367 DZ). Mean age was 22.5 ± 2.8 years; gender distribution was 59% women. Sleep reactivity was measured using the Ford Insomnia Response to Stress Test (FIRST). The criterion for insomnia was having difficulty falling asleep, staying asleep, or nonrefreshing sleep "usually or always" for ≥ 1 month, with at least "somewhat" interference with daily functioning. The prevalence of insomnia was 21%. Heritability estimates for sleep reactivity were 29% for females and 43% for males. The environmental variance for sleep reactivity was greater for females and entirely due to nonshared effects. Insomnia was 43% to 55% heritable for males and females, respectively; the sex difference was not significant. The genetic variances in insomnia and FIRST scores were correlated (r = 0.54 in females, r = 0.64 in males), as were the environmental variances (r = 0.32 in females, r = 0.37 in males). In terms of individual insomnia symptoms, difficulty staying asleep (25% to 35%) and nonrefreshing sleep (34% to 35%) showed relatively more genetic influences than difficulty falling asleep (0%). Sleep reactivity to stress has a substantial genetic component, as well as an environmental component. The finding that FIRST scores and insomnia symptoms share genetic influences is consistent with the hypothesis that sleep reactivity may be a genetic vulnerability for developing insomnia.
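A textbook shortcut for the kind of heritability decomposition reported here is Falconer's formula, h² = 2(rMZ − rDZ), with shared and nonshared environment estimated as 2rDZ − rMZ and 1 − rMZ. The study fits full biometric twin models rather than this shortcut, and the twin correlations below are illustrative assumptions, not the FIRST-score estimates.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation to the ACE decomposition from twin correlations:
    h2 = 2(rMZ - rDZ), c2 = 2rDZ - rMZ, e2 = 1 - rMZ."""
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return {"additive_genetic": h2, "shared_env": c2, "nonshared_env": e2}

# Illustrative twin correlations for a sleep-reactivity-like trait (placeholders)
print(falconer_ace(r_mz=0.42, r_dz=0.21))
```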
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
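The averaging step can be sketched with BIC-based weights, w_k ∝ exp(−ΔBIC_k/2), combining the per-model estimates and splitting the total variance into weighted within-model and between-model parts, as described above. The hydraulic conductivity means, variances and BIC values below are hypothetical placeholders, not results for the Tasuj plain aquifer.

```python
import numpy as np

def bma_combine(means, variances, bics):
    """BIC-weighted model averaging: weights w_k ∝ exp(-ΔBIC_k/2); total variance is
    the weighted within-model variance plus the between-model variance."""
    means, variances, bics = map(np.asarray, (means, variances, bics))
    w = np.exp(-0.5 * (bics - bics.min()))
    w = w / w.sum()
    mean = np.sum(w * means)
    within = np.sum(w * variances)
    between = np.sum(w * (means - mean) ** 2)
    return mean, within + between, w

# Hypothetical hydraulic-conductivity estimates (m/day) from three AI models at one location
k_means = [3.1, 2.4, 2.9]
k_vars = [0.20, 0.35, 0.25]
bic = [412.0, 413.5, 420.0]
print(bma_combine(k_means, k_vars, bic))
```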
Almkvist, Ove; Bosnes, Ole; Bosnes, Ingunn; Stordal, Eystein
2017-01-01
Background Subjective memory is commonly considered to be a unidimensional measure. However, theories of performance-based memory suggest that subjective memory could be divided into more than one dimension. Objective To divide subjective memory into theoretically related components of memory and explore the relationship to disease. Methods In this study, various aspects of self-reported memory were studied with respect to demographics and diseases in the third wave of the HUNT epidemiological study in middle Norway. The study included all individuals 55 years of age or older, who responded to a nine-item questionnaire on subjective memory and questionnaires on health (n=18 633). Results A principal component analysis of the memory items resulted in two memory components; the criterion used was an eigenvalue above 1, which accounted for 54% of the total variance. The components were interpreted as long-term memory (LTM; the first component; 43% of the total variance) and short-term memory (STM; the second component; 11% of the total variance). Memory impairment was significantly related to all diseases (except Bechterew’s disease), most strongly to brain infarction, heart failure, diabetes, cancer, chronic obstructive pulmonary disease and whiplash. For most diseases, the STM component was more affected than the LTM component; however, in cancer, the opposite pattern was seen. Conclusions Subjective memory impairment as measured in HUNT contained two components, which were differentially associated with diseases. PMID:28490551
An exploration of the role of subordinate affect in leader evaluations.
Martinko, Mark J; Mackey, Jeremy D; Moss, Sherry E; Harvey, Paul; McAllister, Charn P; Brees, Jeremy R
2018-03-26
Leadership research has been encumbered by a proliferation of constructs and measures, despite little evidence that each is sufficiently conceptually and operationally distinct from the others. We draw from research on subordinates' implicit theories of leader behavior, behaviorally anchored rating scales, and decision making to argue that leader affect (i.e., the degree to which subordinates have positive and negative feelings about their supervisors) underlies the common variance shared by many leadership measures. To explore this possibility, we developed and validated measures of positive and negative leader affect (i.e., the Leader Affect Questionnaires; LAQs). We conducted 10 studies to develop the five-item positive and negative LAQs and to examine their convergent, discriminant, predictive, and criterion-related validity. We conclude that a) the LAQs provide highly reliable and valid tools for assessing subordinates' evaluations of their leaders; b) there is significant overlap between existing leadership measures, and a large proportion of this overlap is a function of the affect captured by the LAQs; c) when the LAQs are used as control variables, in most cases, they reduce the strength of relationships between leadership measures and other variables; d) the LAQs account for significant variance in outcomes beyond that explained by other leadership measures; and e) there is a considerable amount of unexplained variance between leadership measures that the LAQs do not capture. Research suggestions are provided and the implications of our results are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Harwell, Glenn R.; Mobley, Craig A.
2009-01-01
This report, done by the U.S. Geological Survey in cooperation with Dallas/Fort Worth International (DFW) Airport in 2008, describes the occurrence and distribution of fecal indicator bacteria (fecal coliform and Escherichia [E.] coli), and the physical and chemical indicators of water quality (relative to Texas Surface Water Quality Standards), in streams receiving discharge from DFW Airport and vicinity. At sampling sites in the lower West Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts for five of the eight West Fork Trinity River watershed sampling sites exceeded the Texas Commission on Environmental Quality E. coli criterion, thus not fully supporting contact recreation. Two of the five sites with geometric means that exceeded the contact recreation criterion are airport discharge sites, which here means that the major fraction of discharge at those sites is from DFW Airport. At sampling sites in the Elm Fork Trinity River watershed during low-flow conditions, geometric mean E. coli counts exceeded the geometric mean contact recreation criterion for seven (four airport, three non-airport) of 13 sampling sites. Under low-flow conditions in the lower West Fork Trinity River watershed, E. coli counts for airport discharge sites were significantly different from (lower than) E. coli counts for non-airport sites. Under low-flow conditions in the Elm Fork Trinity River watershed, there was no significant difference between E. coli counts for airport sites and non-airport sites. During stormflow conditions, fecal indicator bacteria counts at the most downstream (integrator) sites in each watershed were considerably higher than counts at those two sites during low-flow conditions. When stormflow sample counts are included with low-flow sample counts to compute a geometric mean for each site, classification changes from fully supporting to not fully supporting contact recreation on the basis of the geometric mean contact recreation criterion. All water temperature measurements at sampling sites in the lower West Fork Trinity River watershed were less than the maximum criterion for water temperature for the lower West Fork Trinity segment. Of the measurements at sampling sites in the Elm Fork Trinity River watershed, 95 percent were less than the maximum criterion for water temperature for the Elm Fork Trinity River segment. All dissolved oxygen concentrations were greater than the minimum criterion for stream segments classified as exceptional aquatic life use. Nearly all pH measurements were within the pH criterion range for the classified segments in both watersheds, except for those at one airport site. For sampling sites in the lower West Fork Trinity River watershed, all annual average dissolved solids concentrations were less than the maximum criterion for the lower West Fork Trinity segment. For sampling sites in the Elm Fork Trinity River, nine of the 13 sites (six airport, three non-airport) had annual averages that exceeded the maximum criterion for that segment. For ammonia, 23 samples from 12 different sites had concentrations that exceeded the screening level for ammonia. Of these 12 sites, only one non-airport site had more than the required number of exceedances to indicate a screening level concern. Stormflow total suspended solids concentrations were significantly higher than low-flow concentrations at the two integrator sites. 
For sampling sites in the lower West Fork Trinity River watershed, all annual average chloride concentrations were less than the maximum annual average chloride concentration criterion for that segment. For the 13 sampling sites in the Elm Fork Trinity River watershed, one non-airport site had an annual average concentration that exceeded the maximum annual average chloride concentration criterion for that segment.
Plasma dynamics on current-carrying magnetic flux tubes
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1992-01-01
A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that density minimum tends to deepen, primarily in the cathode end, and the entire potential drop becomes concentrated across the region of density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
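The abstract does not give the form of the new statistic, so the sketch below only illustrates the general temporal method it builds on: a drift statistic computed from allele-frequency change, corrected for sampling, and converted to an effective-size estimate. The Fc form and the sampling correction used here are generic textbook choices, not Pollak's estimator, and the frequencies and sample sizes are invented.

```python
import numpy as np

def temporal_ne(p0, pt, t, s0, st):
    """Generic temporal-method estimate of effective population size from allele
    frequencies observed t generations apart, with sample sizes s0 and st
    (diploid individuals). Uses an Fc-type drift statistic with a simple
    sampling correction; this is an illustration, not Pollak's estimator."""
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2.0 - p0 * pt))
    f_drift = fc - 1.0 / (2 * s0) - 1.0 / (2 * st)   # subtract sampling noise
    return t / (2.0 * f_drift)

# Frequencies of one allele at five independent loci, six generations apart.
ne_hat = temporal_ne(p0=[0.45, 0.30, 0.62, 0.51, 0.38],
                     pt=[0.52, 0.22, 0.70, 0.43, 0.47],
                     t=6, s0=50, st=50)
print(round(ne_hat))
```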
A simulation study of Large Area Crop Inventory Experiment (LACIE) technology
NASA Technical Reports Server (NTRS)
Ziegler, L. (Principal Investigator); Potter, J.
1979-01-01
The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15-year period, using accuracy assessment results for phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15-year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.
Effectiveness of basic display augmentation in vehicular control by visual field cues
NASA Technical Reports Server (NTRS)
Grunwald, A. J.; Merhav, S. J.
1978-01-01
The paper investigates the effectiveness of different basic display augmentation concepts - fixed reticle, velocity vector, and predicted future vehicle path - for RPVs controlled by a vehicle-mounted TV camera. The task is lateral manual control of a low flying RPV along a straight reference line in the presence of random side gusts. The man-machine system and the visual interface are modeled as a linear time-invariant system. Minimization of a quadratic performance criterion is assumed to underlie the control strategy of a well-trained human operator. The solution for the optimal feedback matrix enables the explicit computation of the variances of lateral deviation and directional error of the vehicle and of the control force that are used as performance measures.
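A minimal sketch of the optimal-control machinery described above: solve the Riccati equation for a quadratic cost, form the optimal feedback matrix, and recover steady-state variances from a Lyapunov equation under white-noise gusts. The state-space matrices, weights and gust intensity below are placeholders, not the paper's RPV model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Placeholder lateral dynamics: x = [lateral deviation, lateral velocity],
# u = control force, w = random side-gust disturbance.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
G = np.array([[0.0], [1.0]])           # gust input matrix
Q = np.diag([1.0, 0.1])                # quadratic penalty on the states
R = np.array([[0.01]])                 # penalty on control effort
W = np.array([[0.2]])                  # gust spectral density

P = solve_continuous_are(A, B, Q, R)   # Riccati solution of the quadratic criterion
K = np.linalg.solve(R, B.T @ P)        # optimal feedback matrix, u = -K x

# Steady-state covariance of the closed loop from the Lyapunov equation gives the
# variances of the states and of the control force, used as performance measures.
Acl = A - B @ K
X = solve_continuous_lyapunov(Acl, -G @ W @ G.T)
state_variances = np.diag(X)
control_variance = (K @ X @ K.T).item()
print(state_variances, control_variance)
```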
Do Formal Inspections Ensure that British Zoos Meet and Improve on Minimum Animal Welfare Standards?
Draper, Chris; Browne, William; Harris, Stephen
2013-11-08
We analysed two consecutive inspection reports for each of 136 British zoos made by government-appointed inspectors between 2005 and 2011 to assess how well British zoos were complying with minimum animal welfare standards; median interval between inspections was 1,107 days. There was no conclusive evidence for overall improvements in the levels of compliance by British zoos. Having the same zoo inspector at both inspections affected the outcome of an inspection; animal welfare criteria were more likely to be assessed as unchanged if the same inspector was present on both inspections. This, and erratic decisions as to whether a criterion applied to a particular zoo, suggest inconsistency in assessments between inspectors. Zoos that were members of a professional association (BIAZA) did not differ significantly from non-members in the overall number of criteria assessed as substandard at the second inspection but were more likely to meet the standards on both inspections and less likely to have criteria remaining substandard. Lack of consistency between inspectors, and the high proportion of zoos failing to meet minimum animal welfare standards nearly thirty years after the Zoo Licensing Act came into force, suggest that the current system of licensing and inspection is not meeting key objectives and requires revision.
Aggen, S. H.; Neale, M. C.; Røysamb, E.; Reichborn-Kjennerud, T.; Kendler, K. S.
2009-01-01
Background Despite its importance as a paradigmatic personality disorder, little is known about the measurement invariance of the DSM-IV borderline personality disorder (BPD) criteria; that is, whether the criteria assess the disorder equivalently across different groups. Method BPD criteria were evaluated at interview in 2794 young adult Norwegian twins. Analyses, based on item-response modeling, were conducted to test for differential age and sex moderation of the individual BPD criteria characteristics given factor-level covariate effects. Results Confirmatory factor analytic results supported a unidimensional structure for the nine BPD criteria. Compared to males, females had a higher BPD factor mean, larger factor variance and there was a significant age by sex interaction on the factor mean. Strong differential sex and age by sex interaction effects were found for the ‘impulsivity’ criterion factor loading and threshold. Impulsivity related to the BPD factor poorly in young females but improved significantly in older females. Males reported more impulsivity compared to females and this difference increased with age. The ‘affective instability’ threshold was also moderated, with males reporting less than expected. Conclusions The results suggest the DSM-IV BPD ‘impulsivity’ and ‘affective instability’ criteria function differentially with respect to age and sex, with impulsivity being especially problematic. If verified, these findings have important implications for the interpretation of prior research with these criteria. These non-invariant age and sex effects may be identifying criteria-level expression features relevant to BPD nosology and etiology. Criterion functioning assessed using modern psychometric methods should be considered in the development of DSM-V. PMID:19400977
Ruch, Willibald; Heintz, Sonja
2017-01-01
How strongly does humor (i.e., the construct-relevant content) in the Humor Styles Questionnaire (HSQ; Martin et al., 2003) determine the responses to this measure (i.e., construct validity)? Also, how much does humor influence the relationships of the four HSQ scales, namely affiliative, self-enhancing, aggressive, and self-defeating, with personality traits and subjective well-being (i.e., criterion validity)? The present paper answers these two questions by experimentally manipulating the 32 items of the HSQ to only (or mostly) contain humor (i.e., construct-relevant content) or to substitute the humor content with non-humorous alternatives (i.e., only assessing construct-irrelevant context). Study 1 (N = 187) showed that the HSQ affiliative scale was mainly determined by humor, self-enhancing and aggressive were determined by both humor and non-humorous context, and self-defeating was primarily determined by the context. This suggests that humor is not the primary source of the variance in three of the HSQ scales, thereby limiting their construct validity. Study 2 (N = 261) showed that the relationships of the HSQ scales to the Big Five personality traits and subjective well-being (positive affect, negative affect, and life satisfaction) were consistently reduced (personality) or vanished (subjective well-being) when the non-humorous contexts in the HSQ items were controlled for. For the HSQ self-defeating scale, the pattern of relationships to personality was also altered, supporting a positive rather than a negative view of the humor in this humor style. The present findings thus call for a reevaluation of the role that humor plays in the HSQ (construct validity) and in the relationships to personality and well-being (criterion validity). PMID:28473794
Comparing Multiple Criteria for Species Identification in Two Recently Diverged Seabirds
Militão, Teresa; Gómez-Díaz, Elena; Kaliontzopoulou, Antigoni; González-Solís, Jacob
2014-01-01
Correct species identification is a crucial issue in systematics with key implications for prioritising conservation effort. However, it can be particularly challenging in recently diverged species due to their strong similarity and relatedness. In such cases, species identification requires multiple and integrative approaches. In this study we used multiple criteria, namely plumage colouration, biometric measurements, geometric morphometrics, stable isotopes analysis (SIA) and genetics (mtDNA), to identify the species of 107 bycatch birds from two closely related seabird species, the Balearic (Puffinus mauretanicus) and Yelkouan (P. yelkouan) shearwaters. Biometric measurements, stable isotopes and genetic data produced two stable clusters of bycatch birds matching the two study species, as indicated by reference birds of known origin. Geometric morphometrics was excluded as a species identification criterion since the two clusters were not stable. The combination of plumage colouration, linear biometrics, stable isotope and genetic criteria was crucial to infer the species of 103 of the bycatch specimens. In the present study, SIA in particular emerged as a powerful criterion for species identification, but temporal stability of the isotopic values is critical for this purpose. Indeed, we found some variability in stable isotope values over the years within each species, but species differences explained most of the variance in the isotopic data. Yet this result pinpoints the importance of examining sources of variability in the isotopic data on a case-by-case basis prior to the cross-application of the SIA approach to other species. Our findings illustrate how the integration of several methodological approaches can help to correctly identify individuals from recently diverged species, as each criterion measures different biological phenomena and species divergence is not expressed simultaneously in all biological traits. PMID:25541978
Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C
2017-06-14
A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100Hz) and range-limiting (to ±6g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
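The down-sampling and range-limiting effects reported above can be reproduced qualitatively on any high-rate signal. The sketch below applies both operations to a synthetic impact pulse; the pulse shape, amplitudes and the crude decimation scheme are illustrative assumptions, not data or processing from the study.

```python
import numpy as np

def peak_g_errors(signal, fs_full=640, fs_low=100, g_limit=6.0):
    """Peak acceleration of a criterion-grade signal and the percentage
    underestimation after down-sampling, range-limiting, and both combined."""
    step = int(round(fs_full / fs_low))          # crude decimation to ~100 Hz
    down = signal[::step]
    clipped = np.clip(signal, -g_limit, g_limit)
    both = np.clip(down, -g_limit, g_limit)
    peak = np.max(np.abs(signal))
    err = lambda x: 100.0 * (peak - np.max(np.abs(x))) / peak
    return peak, err(down), err(clipped), err(both)

# Synthetic 8 g, 15 ms half-sine impact sampled at 640 Hz (illustrative only).
t = np.arange(0.0, 0.2, 1.0 / 640)
pulse = np.where((t > 0.05) & (t < 0.065),
                 8.0 * np.sin(np.pi * (t - 0.05) / 0.015), 0.0)
print(peak_g_errors(pulse))
```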
Comparison of random regression test-day models for Polish Black and White cattle.
Strabel, T; Szyda, J; Ptak, E; Jamrozik, J
2005-10-01
Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.
PCA feature extraction for change detection in multidimensional unlabeled data.
Kuncheva, Ludmila I; Faithfull, William J
2014-01-01
When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
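A sketch of the feature-extraction idea described above: fit PCA on a reference window, retain the lowest-variance components, and monitor a likelihood statistic in the reduced space. A plain Gaussian log-likelihood stands in for the paper's semiparametric criterion, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 10))          # reference window (training distribution)
new = ref.copy()
new[:, 3] += 0.8                          # later window containing a mean shift

pca = PCA().fit(ref)
low_idx = np.argsort(pca.explained_variance_)[:3]   # keep the lowest-variance components
W = pca.components_[low_idx]

def project(X):
    return (X - pca.mean_) @ W.T

feat_ref = project(ref)
model = multivariate_normal(mean=feat_ref.mean(axis=0),
                            cov=np.cov(feat_ref, rowvar=False))

# Mean log-likelihood under the reference model; a clear drop flags a change.
print(model.logpdf(project(ref)).mean(), model.logpdf(project(new)).mean())
```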
NASA Astrophysics Data System (ADS)
Li, Zhi; Jin, Jiming
2017-11-01
Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly but its variance ratios of 2011-2040 to 1961-2005 increased by 1.25 ± 0.55. Streamflow variability was predicted to become greater over most months on the seasonal scale because of the increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.
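The abstract describes the third step as a distribution-free shuffle; the sketch below shows one common rank-reordering of that kind (a Schaake-shuffle-style procedure), which restores inter-site rank correlations while preserving each site's generated marginal distribution. This is an assumed stand-in, not necessarily the exact procedure used in the study.

```python
import numpy as np

def rank_shuffle(synthetic, template):
    """Reorder each column of `synthetic` to follow the rank order of the matching
    column of `template`, restoring inter-site rank correlations while keeping each
    site's generated marginal distribution (Schaake-shuffle-style sketch)."""
    synthetic = np.asarray(synthetic, float)
    template = np.asarray(template, float)
    out = np.empty_like(synthetic)
    for j in range(synthetic.shape[1]):
        ranks = np.argsort(np.argsort(template[:, j]))   # rank of each template value
        out[:, j] = np.sort(synthetic[:, j])[ranks]      # place sorted values by rank
    return out

# Example: three correlated "observed" sites vs. independently generated series.
rng = np.random.default_rng(1)
obs = rng.multivariate_normal([0, 0, 0],
                              [[1.0, 0.8, 0.6], [0.8, 1.0, 0.7], [0.6, 0.7, 1.0]], 200)
gen = rng.normal(size=(200, 3))                   # inter-site correlation lost
print(np.corrcoef(rank_shuffle(gen, obs), rowvar=False).round(2))
```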
Separation Potential for Multicomponent Mixtures: State-of-the Art of the Problem
NASA Astrophysics Data System (ADS)
Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.
2017-03-01
Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that all known potentials do not satisfy the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal search method for optimal parameters of the separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.
Predicting propagation limits of laser-supported detonation by Hugoniot analysis
NASA Astrophysics Data System (ADS)
Shimamura, Kohei; Ofosu, Joseph A.; Komurasaki, Kimiya; Koizumi, Hiroyuki
2015-01-01
Termination conditions of a laser-supported detonation (LSD) wave were investigated using control volume analysis with a Shimada-Hugoniot curve and a Rayleigh line. Because the geometric configurations strongly affect the termination condition, a rectangular tube was used to create the quasi-one-dimensional configuration. The LSD wave propagation velocity and the pressure behind LSD were measured. Results reveal that the detonation states during detonation and at the propagation limit are overdriven detonation and Chapman-Jouguet detonation, respectively. The termination condition is the minimum velocity criterion for the possible detonation solution. Results were verified using pressure measurements of the stagnation pressure behind the LSD wave.
Did the American Academy of Orthopaedic Surgeons osteoarthritis guidelines miss the mark?
Bannuru, Raveendhara R; Vaysbrot, Elizaveta E; McIntyre, Louis F
2014-01-01
The American Academy of Orthopaedic Surgeons (AAOS) 2013 guidelines for knee osteoarthritis recommended against the use of viscosupplementation for failing to meet the criterion of minimum clinically important improvement (MCII). However, the AAOS's methodology contained numerous flaws in obtaining, displaying, and interpreting MCII-based results. The current state of research on MCII allows it to be used only as a supplementary instrument, not a basis for clinical decision making. The AAOS guidelines should reflect this consideration in their recommendations to avoid condemning potentially viable treatments in the context of limited available alternatives. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Senior; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
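A sketch of MDL-style selection of the number of clusters, using a Gaussian mixture and a two-part code-length criterion (numerically half of the BIC). The toy "pulse feature" data and the use of scikit-learn are illustrative; the paper's online algorithm for streaming radar pulses is more involved.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "pulse feature" vectors from three well-separated emitters.
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2))
               for loc in ([0, 0], [3, 0], [0, 3])])

def mdl(gmm, X):
    # Two-part MDL: code length of the data plus code length of the parameters;
    # for a Gaussian mixture this is numerically half of the BIC.
    return 0.5 * gmm.bic(X)

scores = {m: mdl(GaussianMixture(n_components=m, random_state=0).fit(X), X)
          for m in range(1, 7)}
best_m = min(scores, key=scores.get)      # number of emitters selected by MDL
print(best_m, {m: round(s, 1) for m, s in scores.items()})
```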
A holistic framework for design of cost-effective minimum water utilization network.
Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N
2008-07-01
Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.
[Development and validity of workplace bullying in nursing-type inventory (WPBN-TI)].
Lee, Younju; Lee, Mihyoung
2014-04-01
The purpose of this study was to develop an instrument to assess bullying of nurses, and to test the validity and reliability of the instrument. The initial thirty items of WPBN-TI were identified through a review of the literature on types of bullying related to nursing and in-depth interviews with 14 nurses who experienced bullying at work. Sixteen items were developed through two content validity tests by 9 experts and 10 nurses. The final WPBN-TI instrument was evaluated by 458 nurses from five general hospitals in the Incheon metropolitan area. The SPSS 18.0 program was used to assess the instrument based on internal consistency reliability, construct validity, and criterion validity. WPBN-TI consisted of 16 items with three distinct factors (verbal and nonverbal bullying, work-related bullying, and external threats), which explained 60.3% of the total variance. The convergent validity and determinant validity for WPBN-TI were 100.0% and 89.7%, respectively. Known-groups validity of WPBN-TI was demonstrated through mean differences in subjective perceptions of bullying. Criterion validity for WPBN-TI exceeded .70. The reliability of WPBN-TI was a Cronbach's α of .91. WPBN-TI, with high validity and reliability, is suitable for determining types of bullying in the nursing workplace.
NASA Astrophysics Data System (ADS)
Dicecco, S.; Butcher, C.; Worswick, M.; Boettcher, E.; Chu, E.; Shi, C.
2016-11-01
The forming limit behaviour of AA6013-T6 aluminium alloy sheet was characterized under isothermal conditions at room temperature (RT) and 250°C using limiting dome height (LDH) tests. Full field strain measurements were acquired throughout testing using in situ stereoscopic digital image correlation (DIC) techniques. Limit strain data was generated from the resulting full field strain measurements using two localized necking criteria: ISO 12004-2:2008 and a time- and position-dependent criterion, termed the “Necking Zone” (NZ) approach in this paper, introduced by Martinez-Donaire et al. (2014). The limit strains resulting from the two localization detection schemes were compared. It was found that the ISO and NZ limit strains at RT are similar on the draw-side of the FLD, while the NZ approach yields a biaxial major limit strain 14.8% greater than the ISO generated major limit strain. At 250°C, the NZ generated major limit strains are 31-34% greater than the ISO generated major limit strains for near uniaxial, plane strain and biaxial loading conditions, respectively. The significant variance in limit strains between the two methodologies at 250°C highlights the need for a validation study regarding warm FLC determination.
Development and initial validation of the appropriate antibiotic use self-efficacy scale.
Hill, Erin M; Watkins, Kaitlin
2018-06-04
While there are various medication self-efficacy scales that exist, none assess self-efficacy for appropriate antibiotic use. The Appropriate Antibiotic Use Self-Efficacy Scale (AAUSES) was developed, pilot tested, and its psychometric properties were examined. Following pilot testing of the scale, a 28-item questionnaire was examined using a sample (n = 289) recruited through the Amazon Mechanical Turk platform. Participants also completed other scales and items, which were used in assessing discriminant, convergent, and criterion-related validity. Test-retest reliability was also examined. After examining the scale and removing items that did not assess appropriate antibiotic use, an exploratory factor analysis was conducted on 13 items from the original scale. Three factors were retained that explained 65.51% of the variance. The scale and its subscales had adequate internal consistency. The scale had excellent test-retest reliability, as well as demonstrated convergent, discriminant, and criterion-related validity. The AAUSES is a valid and reliable scale that assesses three domains of appropriate antibiotic use self-efficacy. The AAUSES may have utility in clinical and research settings in understanding individuals' beliefs about appropriate antibiotic use and related behavioral correlates. Future research is needed to examine the scale's utility in these settings. Copyright © 2018 Elsevier B.V. All rights reserved.
Rating the raters in a mixed model: An approach to deciphering the rater reliability
NASA Astrophysics Data System (ADS)
Shang, Junfeng; Wang, Yougui
2013-05-01
Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, inconsistency among ratings is a source of variance in scores, and it is therefore natural to verify the trustworthiness of the ratings. Accordingly, estimation of rater reliability is of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores that a rater gives to the ratees are described by fixed effects determined by the ability of the ratees and random effects produced by the disagreement among raters. For the rater random effects in such a mixed model, we derive the posterior distribution used to predict the random effects. To make a quantitative decision about which raters are unreliable, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of the random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.
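A sketch of the kind of mixed model described above: ratee ability enters as a fixed effect and rater disagreement as a random intercept, whose predicted values flag a suspect rater. The full predictive influence function would compare posterior distributions between the full and rater-deleted fits; only the model setup and the predicted rater effects are shown here, and all data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_raters, n_ratees = 8, 30
ability = rng.normal(size=n_ratees)            # fixed effect: true ability of each ratee
rater_bias = rng.normal(scale=0.5, size=n_raters)
rater_bias[0] = 2.5                            # one deliberately unreliable rater

rows = [{"rater": r, "ratee": i, "ability": ability[i],
         "score": ability[i] + rater_bias[r] + rng.normal(scale=0.3)}
        for r in range(n_raters) for i in range(n_ratees)]
data = pd.DataFrame(rows)

# Mixed model: fixed effect for ratee ability, random intercept for each rater.
fit = smf.mixedlm("score ~ ability", data, groups=data["rater"]).fit()
raneff = {r: float(v.iloc[0]) for r, v in fit.random_effects.items()}
suspect = max(raneff, key=lambda r: abs(raneff[r]))   # largest predicted rater effect
print(suspect, sorted(raneff.items()))
```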
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Jarman, Kristin H.; Harvey, Scott D.
2005-05-28
A fundamental problem in analysis of highly multivariate spectral or chromatographic data is reduction of dimensionality. Principal components analysis (PCA), concerned with explaining the variance-covariance structure of the data, is a commonly used approach to dimension reduction. Recently an attractive alternative to PCA, sequential projection pursuit (SPP), has been introduced. Designed to elicit clustering tendencies in the data, SPP may be more appropriate when performing clustering or classification analysis. However, the existing genetic algorithm (GA) implementation of SPP has two shortcomings, computation time and inability to determine the number of factors necessary to explain the majority of the structure in the data. We address both these shortcomings. First, we introduce a new SPP algorithm, a random scan sampling algorithm (RSSA), that significantly reduces computation time. We compare the computational burden of the RSS and GA implementation for SPP on a dataset containing Raman spectra of twelve organic compounds. Second, we propose a Bayes factor criterion, BFC, as an effective measure for selecting the number of factors needed to explain the majority of the structure in the data. We compare SPP to PCA on two datasets varying in type, size, and difficulty; in both cases SPP achieves a higher accuracy with a lower number of latent variables.
Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells
2017-11-01
A clustering approach including row scaling and Ward’s minimum variance method was chosen. This analysis revealed two groups of four samples each.
2017-12-01
carefully to ensure only the minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM...baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of...The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
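A sketch of a pair-count correlation-function estimate of the DD·RR/DR² − 1 form discussed in this line of work, applied to an unclustered toy catalogue (so the result should scatter around zero). The minimum variance pair weighting and selection-function corrections addressed in the paper are not applied here.

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_pair_counts(data, rand, r_edges):
    """Two-point correlation function in radial bins from raw pair counts,
    xi = DD*RR/DR**2 - 1 (no pair weighting or selection-function correction)."""
    td, tr = cKDTree(data), cKDTree(rand)
    cum = lambda a, b: np.asarray(a.count_neighbors(b, r_edges), float)
    dd = np.diff(cum(td, td))
    rr = np.diff(cum(tr, tr))
    dr = np.diff(cum(td, tr))
    return dd * rr / dr**2 - 1.0

rng = np.random.default_rng(3)
data = rng.uniform(0, 100, size=(2000, 3))     # unclustered toy "galaxies"
rand = rng.uniform(0, 100, size=(8000, 3))     # random comparison catalogue
r_edges = np.linspace(1.0, 20.0, 11)
print(xi_pair_counts(data, rand, r_edges).round(3))
```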
The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey
2004-05-10
aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation...minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well
NASA Astrophysics Data System (ADS)
Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang
2018-03-01
This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required for a remote sensing mission, waveforms buried in noise or reflected from ice/land are removed, by defining a peak-to-mean ratio and the cosine similarity of the waveform, before wind speed is retrieved. The single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), a minimum variance (MV) estimator and a Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observations, the approaches based on principal component analysis and minimum variance could not significantly improve retrieval accuracy; however, the BP networks obtained an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original element-space signal is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating it with only its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the above methods when the dimensionality of the covariance matrices is reduced to the same dimension.
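For reference, the baseline element-space minimum variance (Capon) weights that the paper seeks to approximate more cheaply can be written in a few lines; the Legendre-polynomial beamspace reduction itself is not reproduced here, and the array size, steering vector and diagonal loading are illustrative choices.

```python
import numpy as np

def mv_weights(R, a, loading=1e-2):
    """Element-space minimum variance (Capon) weights w = R^-1 a / (a^H R^-1 a),
    with diagonal loading for robustness against a poorly estimated covariance."""
    m = R.shape[0]
    R = R + loading * np.trace(R).real / m * np.eye(m)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy 16-element array with focusing delays already applied (broadside steering).
m = 16
a = np.ones(m, dtype=complex)
rng = np.random.default_rng(4)
snap = (rng.normal(size=(m, 64)) + 1j * rng.normal(size=(m, 64))) / np.sqrt(2)
R = snap @ snap.conj().T / snap.shape[1]       # sample spatial covariance matrix
w = mv_weights(R, a)
beamformed = w.conj() @ snap                   # beamformer output samples
print(np.abs(beamformed[:4]).round(3))
```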
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam towards the desired signal through its computed weight vectors. However, weights computed by LCMV are often unable to form the radiation beam precisely towards the target user and are not effective enough at reducing interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), a dynamic mutated artificial immune system (DM-AIS), and a gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the LCMV weights. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab.
Demographics of an ornate box turtle population experiencing minimal human-induced disturbances
Converse, S.J.; Iverson, J.B.; Savidge, J.A.
2005-01-01
Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
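A Monte Carlo sketch of the kind of stochastic projection described above, assuming lognormally distributed annual growth rates with the reported mean and temporal process variance; the distributional form is an assumption, and the authors' simulation details may differ.

```python
import numpy as np

def prob_decline(lam_mean=1.006, process_var=0.029, years=20, n_sims=100_000, seed=0):
    """Probability of a population decrease over `years`, drawing annual growth
    rates from a lognormal with the given mean and temporal process variance."""
    sigma2 = np.log(1.0 + process_var / lam_mean**2)   # lognormal parameters from
    mu = np.log(lam_mean) - sigma2 / 2.0               # the lambda-scale moments
    rng = np.random.default_rng(seed)
    total_log_growth = rng.normal(mu, np.sqrt(sigma2),
                                  size=(n_sims, years)).sum(axis=1)
    return np.mean(total_log_growth < 0.0)

print(prob_decline())   # close to the ~58% chance of decline reported above
```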
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to achieve to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures has been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is better than that reported in previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than those of the calibration model. This probably resulted from a temporal shift in the dominant controls caused by channel change under a varying flow regime. With advances in earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in large sand-bed rivers like the lower Yellow River.
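The goodness-of-fit and model-selection measures listed above can be computed from observed and predicted values in a few lines; the sketch below uses Gaussian-error forms of the AIC and BIC and invented numbers, not the study's data or its MSC definition.

```python
import numpy as np

def fit_stats(obs, pred, k):
    """A few of the goodness-of-fit and model-selection measures listed above for a
    model with k fitted parameters (Gaussian-error AIC/BIC computed from the RSS)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = obs.size
    resid = obs - pred
    rss = np.sum(resid**2)
    rel_err = np.abs(resid) / obs
    return {
        "RMSE": np.sqrt(rss / n),
        "NS": 1.0 - rss / np.sum((obs - obs.mean()) ** 2),   # Nash-Sutcliffe efficiency
        "MRE%": 100.0 * rel_err.mean(),
        "P50%": 100.0 * np.mean(rel_err <= 0.5),
        "AIC": n * np.log(rss / n) + 2 * k,
        "BIC": n * np.log(rss / n) + k * np.log(n),
    }

# Two hypothetical resistance models evaluated against the same observations.
obs = np.array([0.031, 0.027, 0.040, 0.035, 0.029, 0.044])
m_simple = np.array([0.028, 0.030, 0.037, 0.033, 0.031, 0.041])   # 2 parameters
m_full = np.array([0.030, 0.027, 0.039, 0.035, 0.029, 0.043])     # 4 parameters
print(fit_stats(obs, m_simple, k=2))
print(fit_stats(obs, m_full, k=4))
```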
Do Formal Inspections Ensure that British Zoos Meet and Improve on Minimum Animal Welfare Standards?
Draper, Chris; Browne, William; Harris, Stephen
2013-01-01
Simple Summary Key aims of the formal inspections of British zoos are to assess compliance with minimum standards of animal welfare and promote improvements in animal care and husbandry. We compared reports from two consecutive inspections of 136 British zoos to see whether these goals were being achieved. Most zoos did not meet all the minimum animal welfare standards and there was no clear evidence of improving levels of compliance with standards associated with the Zoo Licensing Act 1981. The current system of licensing and inspection does not ensure that British zoos meet and maintain, let alone exceed, the minimum animal welfare standards. Abstract We analysed two consecutive inspection reports for each of 136 British zoos made by government-appointed inspectors between 2005 and 2011 to assess how well British zoos were complying with minimum animal welfare standards; median interval between inspections was 1,107 days. There was no conclusive evidence for overall improvements in the levels of compliance by British zoos. Having the same zoo inspector at both inspections affected the outcome of an inspection; animal welfare criteria were more likely to be assessed as unchanged if the same inspector was present on both inspections. This, and erratic decisions as to whether a criterion applied to a particular zoo, suggest inconsistency in assessments between inspectors. Zoos that were members of a professional association (BIAZA) did not differ significantly from non-members in the overall number of criteria assessed as substandard at the second inspection but were more likely to meet the standards on both inspections and less likely to have criteria remaining substandard. Lack of consistency between inspectors, and the high proportion of zoos failing to meet minimum animal welfare standards nearly thirty years after the Zoo Licensing Act came into force, suggest that the current system of licensing and inspection is not meeting key objectives and requires revision. PMID:26479752
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications involve four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces the data acquisition requirement from a 16-s breath-holding period to one of 5.33-8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
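A sketch of the general approach: fit an impedance model by weighted least squares over stacked real and imaginary parts and read parameter uncertainties off the linearized covariance matrix. The single-compartment R-I-C model, weights and noise level are illustrative assumptions, not the paper's four- or six-element models.

```python
import numpy as np
from scipy.optimize import curve_fit

def z_model(f, R, I, C):
    """Single-compartment resistance-inertance-compliance impedance (illustrative)."""
    w = 2.0 * np.pi * f
    return R + 1j * (w * I - 1.0 / (w * C))

def stacked(f, R, I, C):
    z = z_model(f, R, I, C)
    return np.concatenate([z.real, z.imag])    # fit real and imaginary parts jointly

f = np.linspace(0.125, 4.0, 25)
true = (2.0, 0.01, 0.05)                       # R, I, C in conventional units
rng = np.random.default_rng(5)
noise = rng.normal(scale=0.1, size=f.size) + 1j * rng.normal(scale=0.1, size=f.size)
z_meas = z_model(f, *true) + noise

sigma = np.full(2 * f.size, 0.1)               # per-point weights (1/sigma^2 weighting)
popt, pcov = curve_fit(stacked, f, np.concatenate([z_meas.real, z_meas.imag]),
                       p0=(1.0, 0.02, 0.1), sigma=sigma, absolute_sigma=True,
                       bounds=([0.0, 0.0, 1e-3], [np.inf, np.inf, np.inf]))
print(popt, np.sqrt(np.diag(pcov)))            # estimates and linearized uncertainties
```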
Classification of feeding and eating disorders: review of evidence and proposals for ICD-11
UHER, RUDOLF; RUTTER, MICHAEL
2012-01-01
Current classification of eating disorders is failing to classify most clinical presentations; ignores continuities between child, adolescent and adult manifestations; and requires frequent changes of diagnosis to accommodate the natural course of these disorders. The classification is divorced from clinical practice, and investigators of clinical trials have felt compelled to introduce unsystematic modifications. Classification of feeding and eating disorders in ICD-11 requires substantial changes to remediate the shortcomings. We review evidence on the developmental and cross-cultural differences and continuities, course and distinctive features of feeding and eating disorders. We make the following recommendations: a) feeding and eating disorders should be merged into a single grouping with categories applicable across age groups; b) the category of anorexia nervosa should be broadened through dropping the requirement for amenorrhoea, extending the weight criterion to any significant underweight, and extending the cognitive criterion to include developmentally and culturally relevant presentations; c) a severity qualifier “with dangerously low body weight” should distinguish the severe cases of anorexia nervosa that carry the riskiest prognosis; d) bulimia nervosa should be extended to include subjective binge eating; e) binge eating disorder should be included as a specific category defined by subjective or objective binge eating in the absence of regular compensatory behaviour; f) combined eating disorder should classify subjects who sequentially or concurrently fulfil criteria for both anorexia and bulimia nervosa; g) avoidant/restrictive food intake disorder should classify restricted food intake in children or adults that is not accompanied by body weight and shape related psychopathology; h) a uniform minimum duration criterion of four weeks should apply. PMID:22654933
Effects of Phasor Measurement Uncertainty on Power Line Outage Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Wang, Jianhui; Zhu, Hao
2014-12-01
Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
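A sketch of a minimum-error (maximum a posteriori) multi-hypothesis decision under Gaussian PMU angle uncertainty, with a Monte Carlo estimate of the resulting detection error probability. The outage signatures, priors and noise covariance are placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypotheses: no outage plus an outage on each of three monitored lines, each with
# a predicted phase-angle signature at three PMU buses (placeholder values, rad).
signatures = np.array([[0.00, 0.00, 0.00],
                       [0.03, -0.01, 0.02],
                       [-0.02, 0.04, 0.01],
                       [0.01, 0.02, -0.03]])
priors = np.array([0.7, 0.1, 0.1, 0.1])
noise_cov = (0.01 ** 2) * np.eye(3)            # PMU phase-angle uncertainty

def map_decision(measurement):
    """Minimum-error (MAP) rule: choose the hypothesis with the largest posterior."""
    post = priors * np.array([multivariate_normal.pdf(measurement, mean=s, cov=noise_cov)
                              for s in signatures])
    return int(np.argmax(post))

# Monte Carlo estimate of the detection error probability when line 2 is out.
rng = np.random.default_rng(6)
samples = rng.multivariate_normal(signatures[2], noise_cov, size=5000)
error_prob = np.mean([map_decision(x) != 2 for x in samples])
print(error_prob)
```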
Enhancing phonon flow through one-dimensional interfaces by impedance matching
NASA Astrophysics Data System (ADS)
Polanco, Carlos A.; Ghosh, Avik W.
2014-08-01
We extend concepts from microwave engineering to thermal interfaces and explore the principles of impedance matching in 1D. The extension is based on the generalization of acoustic impedance to nonlinear dispersions using the contact broadening matrix Γ(ω), extracted from the phonon self energy. For a single junction, we find that for coherent and incoherent phonons, the optimal thermal conductance occurs when the matching Γ(ω) equals the geometric mean of the contact broadenings. This criterion favors the transmission of both low and high frequency phonons by requiring that (1) the low frequency acoustic impedance of the junction matches that of the two contacts by minimizing the sum of interfacial resistances and (2) the cut-off frequency is near the minimum of the two contacts, thereby reducing the spillage of the states into the tunneling regime. For an ultimately scaled single atom/spring junction, the matching criterion transforms to the arithmetic mean for mass and the harmonic mean for spring constant. The matching can be further improved using a composite graded junction with an exponentially varying broadening that functions like a broadband antireflection coating. There is, however, a trade-off as the increased length of the interface brings in additional intrinsic sources of scattering.
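The matching rules stated in the abstract reduce to simple means; the sketch below just evaluates them for illustrative contact parameters (the numbers are not taken from the paper).

```python
import numpy as np

# Illustrative contact parameters (not values from the paper).
gamma1, gamma2 = 4.0, 9.0          # contact broadenings Gamma(w) at a given frequency
m1, m2 = 28.0, 72.0                # contact atomic masses
k1, k2 = 10.0, 30.0                # contact spring constants

gamma_match = np.sqrt(gamma1 * gamma2)    # geometric mean: optimal junction broadening
m_match = 0.5 * (m1 + m2)                 # arithmetic mean for the junction mass
k_match = 2.0 / (1.0 / k1 + 1.0 / k2)     # harmonic mean for the junction spring
print(gamma_match, m_match, k_match)
```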
A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garg, Uma; Chang, Philip, E-mail: umagarg@uwm.edu, E-mail: chang65@uwm.edu
Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs
NASA Astrophysics Data System (ADS)
Garg, Uma; Chang, Philip
2017-02-01
Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman-Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon-oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200 syllables long are the minimum that is appropriate for obtaining stable Riley severity scores. The procedural variants provide similar severity scores.
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
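A hedged sketch of the kind of estimator comparison described above, using synthetic right-skewed data and statsmodels rather than the ANZICS cohort; the covariates are simplified and the BIC bookkeeping counts only the mean-model coefficients (dispersion/scale parameters are ignored):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic right-skewed "LOS" data standing in for the real cohort.
rng = np.random.default_rng(1)
n = 5000
age = rng.normal(60, 18, n)
vent = rng.binomial(1, 0.4, n)
mu = np.exp(0.3 + 0.004 * age + 0.6 * vent)          # conditional mean LOS (days)
data = pd.DataFrame({"los": rng.gamma(1.2, mu / 1.2), "age": age, "vent": vent})

def bic(res, n_obs):
    # BIC from the fitted log-likelihood; counts mean-model coefficients only.
    return -2.0 * res.llf + len(res.params) * np.log(n_obs)

ols_raw = smf.ols("los ~ age + vent", data).fit()
glm_gamma = smf.glm("los ~ age + vent", data,
                    family=sm.families.Gamma(link=sm.families.links.Log())).fit()
# (older statsmodels versions spell the link class sm.families.links.log())

print("OLS, raw LOS        BIC:", round(bic(ols_raw, n), 1))
print("Gamma GLM, log link BIC:", round(bic(glm_gamma, n), 1))
```

Both models above are fitted on the raw scale so their BICs are directly comparable; comparing against a log-scale response model, as in the study, additionally requires a Jacobian adjustment of the log-likelihood.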
Future mission studies: Preliminary comparisons of solar flux models
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
The results of comparisons of the solar flux models are presented. (The wavelength lambda = 10.7 cm radio flux is the best indicator of the strength of the ionizing radiations such as solar ultraviolet and x-ray emissions that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Thus, accurate forecasting of the solar flux F10.7 is crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimation that properly combines all three models into one that minimizes the variance is also discussed. All the physics inherent in each model is thereby combined. This is considered to be the dead-end statistical approach to solar flux forecasting before any nonlinear chaotic approach.
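The combination step mentioned above amounts to inverse-variance (minimum-variance unbiased) weighting when the model errors are treated as unbiased and uncorrelated; a small sketch with made-up forecast values and error variances:

```python
import numpy as np

# Illustrative F10.7 forecasts (solar flux units) from three models and assumed
# error variances for each; these numbers are invented for the sketch.
forecasts = np.array([150.0, 165.0, 158.0])     # e.g. Schatten, MSFC, NOAA
variances = np.array([120.0, 200.0, 90.0])      # assumed forecast error variances

# Linear unbiased minimum-variance combination: inverse-variance weights.
weights = (1.0 / variances) / np.sum(1.0 / variances)
combined = weights @ forecasts
combined_variance = 1.0 / np.sum(1.0 / variances)

print(weights, combined, combined_variance)
```

Correlated model errors would require the full error covariance matrix in place of the diagonal inverse-variance weights.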
NASA Astrophysics Data System (ADS)
Sun, Xuelian; Liu, Zixian
2016-02-01
In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed into different time scales. These properties of DCCA make it possible to improve the investment outcome and make it more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
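For reference, the global minimum variance portfolio used in the MVP model has the closed form w = C^-1 1 / (1' C^-1 1); a small sketch in which the correlation matrix could equally be a PCC or a DCCA estimate at a chosen time scale (all numbers are illustrative):

```python
import numpy as np

def minimum_variance_weights(corr, vols):
    """Global minimum-variance portfolio weights w = C^-1 1 / (1' C^-1 1),
    where C is the covariance built from a correlation matrix and volatilities."""
    cov = np.outer(vols, vols) * corr
    ones = np.ones(len(vols))
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Illustrative 3-asset example; corr could come from Pearson or DCCA coefficients.
corr = np.array([[1.00, 0.35, 0.10],
                 [0.35, 1.00, 0.25],
                 [0.10, 0.25, 1.00]])
vols = np.array([0.20, 0.15, 0.10])     # annualised volatilities
print(minimum_variance_weights(corr, vols))
```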
Demodulation of messages received with low signal to noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-lock loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings as compared to conventional realizations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are in a consider mode, where their estimates are not improved but their associated uncertainties are permitted to influence the filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to realize better control of the water chiller unit. To verify the performance of the adaptive control method, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance to that of the conventional PID controller.
Almkvist, Ove; Bosnes, Ole; Bosnes, Ingunn; Stordal, Eystein
2017-05-09
Subjective memory is commonly considered to be a unidimensional measure. However, theories of performance-based memory suggest that subjective memory could be divided into more than one dimension. To divide subjective memory into theoretically related components of memory and explore the relationship to disease. In this study, various aspects of self-reported memory were studied with respect to demographics and diseases in the third wave of the HUNT epidemiological study in middle Norway. The study included all individuals 55 years of age or older, who responded to a nine-item questionnaire on subjective memory and questionnaires on health (n=18 633). A principal component analysis of the memory items resulted in two memory components; the criterion used was an eigenvalue above 1, which accounted for 54% of the total variance. The components were interpreted as long-term memory (LTM; the first component; 43% of the total variance) and short-term memory (STM; the second component; 11% of the total variance). Memory impairment was significantly related to all diseases (except Bechterew's disease), most strongly to brain infarction, heart failure, diabetes, cancer, chronic obstructive pulmonary disease and whiplash. For most diseases, the STM component was more affected than the LTM component; however, in cancer, the opposite pattern was seen. Subjective memory impairment as measured in HUNT contained two components, which were differentially associated with diseases. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Netz, Yael; Dunsky, Ayelet; Zach, Sima; Goldsmith, Rebecca; Shimony, Tal; Goldbourt, Uri; Zeev, Aviva
2012-12-01
Official health organizations have established the dose of physical activity needed for preserving both physical and psychological health in old age. The objective of this study was to explore whether adherence to the recommended criterion of physical activity accounted for better psychological functioning in older adults in Israel. A random sample of 1,663 (799 men) Israelis reported their physical activity routine, and based on official guidelines were divided into sufficiently active, insufficiently active, and inactive groups. The General Health Questionnaire (GHQ) was used for assessing mental health and the Mini-Mental State Examination (MMSE) for assessing cognitive functioning. Factor analysis performed on the GHQ yielded two factors - positive and negative. Logistic regressions for the GHQ factors and for the MMSE were conducted to explain their variance, with demographic variables entered first, followed by health and then physical activity. The explained variance in the three steps was Cox and Snell R2 = 0.022, 0.023, 0.039 for the positive factor, 0.066, 0.093, 0.101 for the negative factor, and 0.204, 0.206, 0.209 for the MMSE. Adherence to the recommended dose of physical activity accounted for better psychological functioning beyond demographic and health variables; however, the additional explained variance was small. More specific guidelines of physical activity may elucidate a stronger relationship, but only randomized controlled trials can reveal a cause-effect relationship between physical activity and psychological functioning. More studies are needed focusing on the positive factor of psychological functioning.
NASA Astrophysics Data System (ADS)
Larry, Triaka A.
The need for more diversity in STEM-related careers and college majors is urgent. Self-efficacy and student-teacher relationships are factors that have been linked to influencing students’ pursuit of subject-specific careers and academic achievement. The impact of self-efficacy and student perceptions of teacher interpersonal behaviors on student achievement has been extensively researched in the areas of Mathematics and English; however, most studies using science achievement as a criterion variable were conducted using non-diverse, White upper middle class to affluent participants. In order to determine the strength of relationships between perceived science self-efficacy and student perceptions of teacher interpersonal behaviors as factors that influence science achievement (science GPA), the Science Self-Efficacy Questionnaire (SSEQ) and Questionnaire on Teacher Interactions (QTI) were administered to twelfth grade students enrolled at a highly diverse urban Title I high school, while controlling for demographics, defined as gender, ethnicity, and minority status. Using a hierarchical multiple linear regression analysis, results demonstrated that the predictor variables (i.e., gender, ethnicity, minority status, science self-efficacy, and teacher interpersonal behaviors) accounted for 20.8% of the variance in science GPAs. Science self-efficacy made the strongest unique contribution to explaining science GPA, while minority status and gender were found to be statistically significant contributors to the full model as well. Ethnicity and teacher interpersonal behaviors did not make a statistically significant contribution to the variance in science GPA, and accounted for ≤ 1% of the variance. Implications and recommendations for future research are subsequently given.
Anthropometry as a predictor of vertical jump heights derived from an instrumented platform.
Caruso, John F; Daily, Jeremy S; Mason, Melissa L; Shepherd, Catherine M; McLagan, Jessica R; Marshall, Mallory R; Walker, Ron H; West, Jason O
2012-01-01
The purpose of the current study was to examine the relationship between anthropometry and vertical jump height using jump data obtained from an instrumented platform. Our methods required college-aged (n = 177) subjects to make 3 visits to our laboratory to measure the following anthropometric variables: height, body mass, upper arm length (UAL), lower arm length, upper leg length, and lower leg length. Per jump, maximum height was measured in 3 ways: from the subjects' takeoff, hang times, and as they landed on the platform. Standard multivariate regression assessed how well anthropometry predicted the criterion variance per gender (men, women, pooled) and jump height method (takeoff, hang time, landing) combination. Z-scores indicated that small amounts of the total data were outliers. The results showed that the majority of outliers were from jump heights calculated as women landed on the platform. With the genders pooled, anthropometry predicted a significant (p < 0.05) amount of variance from jump heights calculated from both takeoff and hang time. The anthropometry-vertical jump relationship was not significant for heights calculated as subjects landed on the platform, likely due to the female outliers. Yet anthropometric data of men did predict a significant amount of variance from heights calculated when they landed on the platform; univariate correlations of men's data revealed that UAL was the best predictor. It was concluded that the large sample of men's data led to greater data heterogeneity and a higher univariate correlation. Because of our sample size and data heterogeneity, practical applications suggest that coaches may find our results best predict performance for a variety of college-aged athletes and vertical jump enthusiasts.
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lekking without a paradox in the buff-breasted sandpiper
Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.
1997-01-01
Females in lek‐breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff‐breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single‐locus minisatellite DNA probes to provide the first evidence from a lek‐breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff‐breasted sandpipers. The behavior of other lek‐breeding birds is sufficiently similar to that of buff‐breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
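A compact sketch of the ordinary kriging system described above (no drift, exponential semivariogram with illustrative parameters); the station coordinates and precipitation values are made up:

```python
import numpy as np

def semivariogram(h, sill=1.0, corr_range=50.0, nugget=0.0):
    """Exponential semivariogram model (parameters are illustrative)."""
    return nugget + sill * (1.0 - np.exp(-h / corr_range))

def ordinary_kriging(xy, z, x0):
    """Unbiased minimum-variance (ordinary kriging) estimate and variance at x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = semivariogram(d)
    A[:n, n] = 1.0          # unbiasedness constraint (Lagrange multiplier column)
    A[n, :n] = 1.0
    b = np.append(semivariogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z
    kriging_variance = b[:n] @ w + mu
    return estimate, kriging_variance

# Toy example: mean annual precipitation (inches) at a few station locations (km).
xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [50.0, 50.0]])
z = np.array([14.0, 12.5, 16.0, 11.0])
print(ordinary_kriging(xy, z, np.array([20.0, 20.0])))
```

Handling the quadratic-drift (universal kriging) case would add drift basis functions as extra rows and columns of the same linear system.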
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used in rating variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality; and leader-subordinate concepts such as person and task-oriented leader behavior, and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. The variance in safety performance was explained 46% by decision efficiency, command reversal, and decision quality. The variance of decision quality, an alternative substantive dependent variable to safety performance, was explained 60% by decision efficiency and the captain's quality of within-crew communications. The variance of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60% by small numbers of preceding independent variables. A principal component, varimax factor analysis supported the model structure suggested by regression analyses.
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
NASA Astrophysics Data System (ADS)
Ashworth, J. R.; Sheplev, V. S.
1997-09-01
Layered coronas between two reactant minerals can, in many cases, be attributed to diffusion-controlled growth with local equilibrium. This paper clarifies and unifies the previous approaches of various authors to the simplest form of modelling, which uses no assumed values for thermochemical quantities. A realistic overall reaction must be estimated from measured overall proportions of minerals and their major element compositions. Modelling is not restricted to a particular number of components S, relative to the number of phases Φ. If Φ > S + 1, the overall reaction is a combination of simultaneous reactions. The stepwise method, solving for the local reaction at each boundary in turn, is extended to allow for recurrence of a mineral (its presence in two parts of the layer structure separated by a gap). The equations are also given in matrix form. A thermodynamic stability criterion is derived, determining which layer sequence is truly stable if several are computable from the same inputs. A layer structure satisfying the stability criterion has greater growth rate (and greater rate of entropy production) than the other computable layer sequences. This criterion of greatest entropy production is distinct from Prigogine's theorem of minimum entropy production, which distinguishes the stationary or quasi-stationary state from other states of the same layer sequence. The criterion leads to modification of previous results for coronas comprising hornblende, spinel, and orthopyroxene between olivine (Ol) and plagioclase (Pl). The outcome supports the previous inference that Si, and particularly Al, commonly behave as immobile relative to other cation-forming major elements. The affinity (-ΔG) of a corona-forming reaction is estimated, using previous estimates of diffusion coefficient and the duration t of reaction, together with a new model quantity (-ΔG)*. For an example of the Ol + Pl reaction, a rough calculation gives (-ΔG) > 1.7RT (per mole of Pl consumed, based on a 24-oxygen formula for Pl). At 600-700°C, this represents (-ΔG) > 10 kJ mol^-1 and departure from equilibrium temperature by at least ~100°C. The lower end of this range is petrologically reasonable and, for t < 100 Ma, corresponds to a Fick's-law diffusion coefficient for Al, D(Al) > 10^-25 m^2 s^-1, larger than expected for lattice diffusion but consistent with fluid-absent grain-boundary diffusion and small concentration gradients.
On the problem of data assimilation by means of synchronization
NASA Astrophysics Data System (ADS)
Szendro, Ivan G.; RodríGuez, Miguel A.; López, Juan M.
2009-10-01
The potential use of synchronization as a method for data assimilation is investigated in a Lorenz96 model. Data representing the reality are obtained from a Lorenz96 model with added noise. We study the assimilation scheme by means of synchronization for different noise intensities. We use a novel plot representation of the synchronization error in a phase diagram consisting of two variables: the amplitude and the width of the error after a suitable logarithmic transformation (the so-called mean-variance of logarithms diagram). Our main result concerns the existence of an "optimal" coupling for which the synchronization is maximal. We finally show how this allows us to quantify the degree of assimilation, providing a criterion for the selection of optimal couplings and validity of models.
Further evidence for a broader concept of somatization disorder using the somatic symptom index.
Hiller, W; Rief, W; Fichter, M M
1995-01-01
Somatization syndromes were defined in a sample of 102 psychosomatic inpatients according to the restrictive criteria of DSM-III-R somatization disorder and the broader diagnostic concept of the Somatic Symptom Index (SSI). Both groups showed a qualitatively similar pattern of psychopathological comorbidity and had elevated scores on measures of depression, hypochondriasis, and anxiety. A good discrimination between mild and severe forms of somatization was found by using the SSI criterion. SSI use accounted for a substantial amount of comorbidity variance, with rates of 15%-20% for depression, 16% for hypochondriasis, and 13% for anxiety. The results provide further evidence for the validity of the SSI concept, which reflects the clinical relevance of somatization in addition to the narrow definition of somatization disorder.
WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Wu, S; Qi, H
Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) Acquire cone-beam CT (CBCT) projections; 2) Convert 3D CBCT projection to fan-beam projection by extracting its central plane projection; 3) Convert fan-beam projection to parallel-beam projection utilizing a sinogram-based rebinning algorithm or detail-based rebinning algorithm; 4) Obtain the HLCC profile by integrating the parallel-beam projection per view and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections containing only one type of artifact (i.e. geometry, scatter, beam hardening, lag and noise artifact) were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as that of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifact based on HLCC theory. According to our simulation results, the severity of different artifact types is found to be in the following order: Scatter>Geometry>Beam hardening>Lag>Noise>Artifact-free in dental CBCT.
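A minimal sketch of the HLCC-profile metrics in step 4, assuming the zeroth-order consistency condition (the view-wise integral of a parallel-beam sinogram should be identical for every view) and an assumed definition of wave percentage as peak-to-peak fluctuation relative to the mean, since the abstract does not define it:

```python
import numpy as np

def hlcc_profile(parallel_sinogram):
    """Zeroth-order Helgason-Ludwig consistency: the integral of a parallel-beam
    projection over the detector should be the same for every view angle."""
    return parallel_sinogram.sum(axis=1)          # one value per view

def artifact_metrics(profile):
    # "Wave percentage" taken here as peak-to-peak fluctuation relative to the
    # profile mean -- an assumed definition for this sketch.
    wave_percentage = 100.0 * (profile.max() - profile.min()) / profile.mean()
    return wave_percentage, profile.var()

# sinogram shape: (n_views, n_detector_bins), e.g. after fan-to-parallel rebinning
sinogram = np.abs(np.random.default_rng(0).normal(1.0, 0.01, size=(360, 512)))
print(artifact_metrics(hlcc_profile(sinogram)))
```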
Classification of resistance to passive motion using minimum probability of error criterion.
Chan, H C; Manry, M T; Kondraske, G V
1987-01-01
Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects was processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
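A hedged sketch of a minimum-probability-of-error (maximum-posterior Gaussian) classifier on normally distributed feature vectors; the eight-dimensional features here are simulated, not the Legendre-fit features from the instrument:

```python
import numpy as np
from scipy.stats import multivariate_normal

def train(features, labels):
    """Per-class Gaussian model (mean, covariance, prior) for the feature vectors."""
    model = {}
    for c in np.unique(labels):
        x = features[labels == c]
        model[c] = (x.mean(axis=0), np.cov(x, rowvar=False), len(x) / len(labels))
    return model

def classify(model, x):
    """Minimum probability of error under equal misclassification costs:
    choose the class with the largest posterior (log prior + log likelihood)."""
    scores = {c: np.log(p) + multivariate_normal(m, S).logpdf(x)
              for c, (m, S, p) in model.items()}
    return max(scores, key=scores.get)

# Illustrative data: 8 features per subject, two diagnostic classes.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(30, 8))
parkinson = rng.normal(0.8, 1.2, size=(14, 8))
X = np.vstack([normal, parkinson])
y = np.array(["Normal"] * 30 + ["Parkinson"] * 14)
model = train(X, y)
print(classify(model, X[0]), classify(model, X[-1]))
```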
Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo
2010-05-01
The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 degrees C must meet an acceptability criterion of 218 microm. In order to avoid false positives, minimums of 32 per thousand salinity, pH 7 and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC) are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box
NASA Astrophysics Data System (ADS)
Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan
2016-09-01
The field of gear design is an extremely important area in engineering. In this work a spur gear reduction unit is considered. A review of relevant literature in the area of gear design indicates that compact design of a gearbox involves a complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are of conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width by using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints obtained from the recent literature. Attention has been devoted to various novel constraints like the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The output from various algorithms like genetic algorithm, fmincon (constrained nonlinear minimization), NSGA-II etc. is compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face-width for a minimum centre distance and a maximum power transmission for any given material.
Computer program grade 2 for the design and analysis of heat-pipe wicks
NASA Technical Reports Server (NTRS)
Eninger, J. E.; Edwards, D. K.
1976-01-01
This user's manual describes the revised version of the computer program GRADE(1), which designs and analyzes heat pipes with graded porosity fibrous slab wicks. The revisions are: (1) automatic calculation of the minimum condenser-end stress that will not result in an excess-liquid puddle or a liquid slug in the vapor space; (2) numerical solution of the equations describing flow in the circumferential grooves to assess the burnout criterion; (3) calculation of the contribution of excess liquid in fillets and puddles to the heat transport; (4) calculation of the effect of partial saturation on the wick performance; and (5) calculation of the effect of vapor flow, which includes viscous-inertial interactions.
Transition of planar Couette flow at infinite Reynolds numbers.
Itano, Tomoaki; Akinaga, Takeshi; Generalis, Sotos C; Sugihara-Seki, Masako
2013-11-01
An outline of the state space of planar Couette flow at high Reynolds numbers (Re<10^{5}) is investigated via a variety of efficient numerical techniques. It is verified from nonlinear analysis that the lower branch of the hairpin vortex state (HVS) asymptotically approaches the primary (laminar) state with increasing Re. It is also predicted that the lower branch of the HVS at high Re belongs to the stability boundary that initiates a transition to turbulence, and that one of the unstable manifolds of the lower branch of the HVS lies on the boundary. These facts suggest that the HVS may provide a criterion for estimating the minimum perturbation that gives rise to transition to turbulent states in the infinite-Re limit.
Scaling laws for ignition at the National Ignition Facility from first principles.
Cheng, Baolian; Kwan, Thomas J T; Wang, Yi-Ming; Batha, Steven H
2013-10-01
We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.
Large space structure damping design
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Haviland, J. K.
1983-01-01
Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
Creep rupture of polymer-matrix composites
NASA Technical Reports Server (NTRS)
Brinson, H. F.; Morris, D. H.; Griffith, W. I.
1981-01-01
The time-dependent creep-rupture process in graphite-epoxy laminates is examined as a function of temperature and stress level. Moisture effects are not considered. An accelerated characterization method of composite-laminate viscoelastic modulus and strength properties is reviewed. It is shown that lamina-modulus master curves can be obtained using a minimum of normally performed quality-control-type testing. Lamina-strength master curves, obtained by assuming a constant-strain-failure criterion, are presented along with experimental data, and reasonably good agreement is shown to exist between the two. Various phenomenological delayed failure models are reviewed and two (the modified rate equation and the Larson-Miller parameter method) are compared to creep-rupture data with poor results.
Optimal design of gas adsorption refrigerators for cryogenic cooling
NASA Technical Reports Server (NTRS)
Chan, C. K.
1983-01-01
The design of gas adsorption refrigerators used for cryogenic cooling in the temperature range of 4K to 120K was examined. The functional relationships among the power requirement for the refrigerator, the system mass, the cycle time and the operating conditions were derived. It was found that the precool temperature, the temperature dependent heat capacities and thermal conductivities, and pressure and temperature variations in the compressors have important impacts on the cooling performance. Optimal designs based on a minimum power criterion were performed for four different gas adsorption refrigerators and a multistage system. It is concluded that the estimates of the power required and the system mass are within manageable limits in various spacecraft environments.
Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.
Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R
2006-02-28
The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
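The code-selection criterion stated above can be computed directly; a small sketch on the weight-2, length-4 constant-weight code, where the quantity of interest (the ratio of maximum to minimum pairwise Hamming distance) is what would be minimized over candidate codes:

```python
import numpy as np
from itertools import combinations

def hamming_distance_ratio(code):
    """Ratio of maximum to minimum pairwise Hamming distance between distinct
    codewords; smaller values are preferred for larger voltage margins."""
    dists = [np.count_nonzero(a != b) for a, b in combinations(code, 2)]
    return max(dists) / min(dists)

# All weight-2 codewords of length 4 (a small constant-weight code).
code = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 1]])
print(hamming_distance_ratio(code))   # 4/2 = 2.0 for this code
```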
James, C P; Harburn, K L; Kramer, J F
1997-08-01
This study addresses test-retest reliability of the Postural and Repetitive Risk-Factors Index (PRRI) for work-related upper body injuries. This assessment was developed by the present authors. A repeated measures design was used to assess the test-retest reliability of a videotaped work-site assessment of subjects' movements. Ten heavy users of video display terminals (VDTs) from a local banking industry participated in the study. The 10 subjects' movements were videotaped for 2 hours on each of 2 separate days, while working on-site at their VDTs. The videotaped assessment, which utilized known postural risk factors for developing musculoskeletal disorder, pain, and discomfort in heavy VDT users (ie, repetitiveness, awkward and static postures, and contraction time), was called the PRRI. The videotaped movement assessments were subsequently analyzed in 15-minute sessions (five sessions per 2-hour videotape, which produced a total of 10 sessions over the 2 testing days), and each session was chosen randomly from the videotape. The subjects' movements were given a postural risk score according to the criteria in the PRRI. Each subject was therefore tested a total of 10 times (ie, 10 sessions), over two days. The maximum PRRI score for both sides of the body was 216 points. Reliability coefficients (RCs) for the PRRI scores were calculated, and the reliability of any one session met the minimum criterion for excellent reliability, which was .75. A two-way analysis of variance (ANOVA) confirmed that there was no statistically significant difference between sessions (p < .05). Calculations using the standard error of measurement (SEM) indicated that an individual tested once, on one day and with a PRRI score of 25, required a change of at least 8 points in order to be confident that a true change in score had occurred. The significant results from the reliability tests indicated that the PRRI was a reliable measurement tool that could be used by occupational health practitioners on the job site.
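The minimum-detectable-change calculation alluded to in the SEM analysis follows a standard form; a short sketch with illustrative inputs (not the study's actual standard deviation or reliability coefficient):

```python
import numpy as np

def sem(sd, reliability):
    """Standard error of measurement."""
    return sd * np.sqrt(1.0 - reliability)

def minimum_detectable_change(sd, reliability, confidence_z=1.96):
    """Smallest change between two measurements exceeding measurement error
    at the chosen confidence level."""
    return confidence_z * np.sqrt(2.0) * sem(sd, reliability)

# Illustrative values only -- not the PRRI study's SD or reliability coefficient.
print(round(minimum_detectable_change(sd=3.0, reliability=0.75), 1))
```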
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period of 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron model from the ARIMA modeling approaches could be used in a safe manner, since the predicted values from these models preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
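A brief sketch of AIC-based ARIMA order selection with statsmodels on a synthetic series standing in for the boron data; a seasonal (SARIMA) search would add a seasonal_order grid to the same loop:

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly "boron concentration" series standing in for the real data.
rng = np.random.default_rng(0)
y = pd.Series(0.5 + 0.1 * np.sin(np.arange(108) * 2 * np.pi / 12)
              + rng.normal(0, 0.05, 108))

best = None
for p, d, q in itertools.product(range(3), range(2), range(3)):
    try:
        res = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue                       # skip candidates that fail to converge
    if best is None or res.aic < best[1]:
        best = ((p, d, q), res.aic)

print("selected order:", best[0], "AIC:", round(best[1], 1))
```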
Benthic macroinvertebrate field sampling effort required to ...
This multi-year pilot study evaluated a proposed field method for its effectiveness in the collection of a benthic macroinvertebrate sample adequate for use in the condition assessment of streams and rivers in the Neuquén Province, Argentina. A total of 13 sites, distributed across three rivers, were sampled. At each site, benthic macroinvertebrates were collected at 11 transects. Each sample was processed independently in the field and laboratory. Based on a literature review and resource considerations, the collection of 300 organisms (minimum) at each site was determined to be necessary to support a robust condition assessment, and therefore, selected as the criterion for judging the adequacy of the method. This targeted number of organisms was collected at all sites, at a minimum, when collections from all 11 transects were combined. Subsequent bootstrapping analysis of data was used to estimate whether collecting at fewer transects would reach the minimum target number of organisms for all sites. In a subset of sites, the total number of organisms frequently fell below the target when fewer than 11 transect collections were combined. Site conditions where <300 organisms might be collected are discussed. These preliminary results suggest that the proposed field method results in a sample that is adequate for robust condition assessment of the rivers and streams of interest. When data become available from a broader range of sites, the adequacy of the field
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of the results. However, the variability of human facial landmarks is only sparsely addressed in the current literature as opposed to e.g. the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points on the jaw and eyebrows had the highest variation. We found only marginal variability with regard to intra-operator differences and portraits. Using a sparse set of landmarks (n=14) that capture the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh that captures all facial features. PMID:25306436
Estimation of stable boundary-layer height using variance processing of backscatter lidar data
NASA Astrophysics Data System (ADS)
Saeed, Umar; Rocadenbosch, Francesc
2017-04-01
The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze fog and the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation by using variance processing and attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of ITaRS project (GA 289923), H2020 programme under ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under TEC2015-63832-P project, and from the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
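A simplified sketch of the variance-processing idea: compute the temporal variance of the backscatter at each range gate and take the height of the first local minimum as the SBLH estimate. The data below are synthetic two-layer profiles; a real retrieval would need smoothing, quality control, and treatment of spatial as well as temporal variance:

```python
import numpy as np

def sblh_from_variance(backscatter, heights):
    """Height of the first local minimum of the temporal variance profile of the
    attenuated backscatter -- a simplified version of the estimator described above."""
    var_profile = backscatter.var(axis=0)                 # variance over time, per gate
    i = np.arange(1, var_profile.size - 1)
    minima = i[(var_profile[i] < var_profile[i - 1]) & (var_profile[i] < var_profile[i + 1])]
    return heights[minima[0]] if minima.size else np.nan

# Synthetic example: two aerosol layers with a quiet gap near 300 m.
rng = np.random.default_rng(1)
heights = np.arange(30.0, 1500.0, 30.0)                   # range gates (m)
layering = 0.5 * np.exp(-((heights - 150.0) / 80.0) ** 2) \
         + 0.4 * np.exp(-((heights - 450.0) / 80.0) ** 2)
backscatter = (1.0 + rng.normal(0.0, 0.05, (480, heights.size))
                   + rng.normal(0.0, 1.0, (480, heights.size)) * layering)
print(sblh_from_variance(backscatter, heights))           # expected near 300 m here
```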
Measuring the Power Spectrum with Peculiar Velocities
NASA Astrophysics Data System (ADS)
Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-01-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1, although with a 1 sigma uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Power spectrum estimation from peculiar velocity catalogues
NASA Astrophysics Data System (ADS)
Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-09-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a brand new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances at different age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a very small number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women 30 to 50 years old. Compared to human annotations, the proposed approach can measure the volume and quantify the distribution of the CT numbers of mammary gland regions successfully. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially on already acquired scans.
Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel
2008-01-01
Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with a residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to the small number of rams in the dataset, the median of this variance suggested scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age-related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
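As a minimal illustration of the model comparison described in the abstract above, the sketch below evaluates a zero-inflated Poisson log-likelihood against a plain Poisson fit. It is a simplified stand-in for the authors' hierarchical Bayesian models (which also carry sire and residual effects); the function names, parameter values and counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

def zip_loglik(y, lam, pi):
    """Log-likelihood of a zero-inflated Poisson (ZIP) model: with probability pi a
    count is a structural zero, otherwise it is Poisson(lam)."""
    y = np.asarray(y)
    p_zero = pi + (1.0 - pi) * poisson.pmf(0, lam)   # zeros can be structural or sampling zeros
    logp = np.where(y == 0, np.log(p_zero), np.log(1.0 - pi) + poisson.logpmf(y, lam))
    return logp.sum()

def poisson_loglik(y, lam):
    return poisson.logpmf(np.asarray(y), lam).sum()

# Illustrative spot counts with an excess of zeros (not the Corriedale field data)
counts = np.array([0, 0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 0, 1, 0, 5])
print(zip_loglik(counts, lam=1.8, pi=0.55), poisson_loglik(counts, lam=counts.mean()))
```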
Investigation into the performance of different models for predicting stutter.
Bright, Jo-Anne; Curran, James M; Buckleton, John S
2013-07-01
In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model was chosen to allow different behaviours of the variance: at each locus, SR was described by two distributions, both with the same mean but with different variances, one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single-source Identifiler™, NGM SElect™ and PowerPlex(®) 21 DNA profiles to show the applicability of our findings to different data sets. SR values determined from the single-source profiles were compared to the calculated SR after application of the models. Model performance was tested by calculating the log-likelihoods and comparing the differences in the Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
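The AIC comparison in the abstract above can be sketched as follows: a two-component normal mixture with a shared mean and two variances is fitted by maximum likelihood and its AIC compared with a log-normal fit. This is a simplified, hypothetical illustration, not the authors' locus-by-locus implementation; the stutter-ratio values are invented.

```python
import numpy as np
from scipy import optimize, stats

def mixture_negloglik(params, sr):
    """Two-component normal mixture with a shared mean and two variances."""
    mu, log_s1, log_s2, logit_w = params
    s1, s2 = np.exp(log_s1), np.exp(log_s2)
    w = 1.0 / (1.0 + np.exp(-logit_w))                # weight of the well-behaved component
    dens = w * stats.norm.pdf(sr, mu, s1) + (1.0 - w) * stats.norm.pdf(sr, mu, s2)
    return -np.sum(np.log(dens))

def aic(negloglik, n_params):
    return 2.0 * n_params + 2.0 * negloglik

# Illustrative stutter ratios at a single locus (not the Identifiler/NGM/PowerPlex data)
sr = np.array([0.06, 0.08, 0.07, 0.09, 0.05, 0.11, 0.15, 0.07, 0.08, 0.06])
fit = optimize.minimize(mixture_negloglik, x0=[sr.mean(), -3.0, -2.0, 1.0], args=(sr,))
lognorm_ll = stats.lognorm.logpdf(sr, *stats.lognorm.fit(sr, floc=0)).sum()
print("mixture AIC   :", round(aic(fit.fun, 4), 1))
print("log-normal AIC:", round(aic(-lognorm_ll, 2), 1))
```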
Čatipović, Marija; Marković, Martina; Grgurić, Josip
2018-04-27
Validating a questionnaire/instrument before proceeding to the field for data collection is important. An 18-item breastfeeding intention, 39-item attitude and 44-item knowledge questionnaire was validated in a Croatian sample of secondary-school students ( N = 277). For the intentions, principal component analysis (PCA) yielded a four-factor solution with 8 items explaining 68.3% of the total variance. Cronbach’s alpha (0.71) indicated satisfactory internal consistency. For the attitudes, PCA showed a seven-factor structure with 33 items explaining 58.41% of total variance. Cronbach’s alpha (0.87) indicated good internal consistency. There were 13 knowledge questions that were retained after item analysis, showing good internal consistency (KR20 = 0.83). In terms of criterion validity, the questionnaire differentiated between students who received breastfeeding education compared to students who were not educated in breastfeeding. Correlations between intentions and attitudes (r = 0.49), intentions and knowledge (r = 0.29), and attitudes and knowledge (r = 0.38) confirmed concurrent validity. The final instrument is reliable and valid for data collection on breastfeeding. Therefore, the instrument is recommended for evaluation of breastfeeding education programs aimed at upper-grade elementary and secondary school students.
Marković, Martina; Grgurić, Josip
2018-01-01
Background: Validating a questionnaire/instrument before proceeding to the field for data collection is important. Methods: An 18-item breastfeeding intention, 39-item attitude and 44-item knowledge questionnaire was validated in a Croatian sample of secondary-school students (N = 277). Results: For the intentions, principal component analysis (PCA) yielded a four-factor solution with 8 items explaining 68.3% of the total variance. Cronbach’s alpha (0.71) indicated satisfactory internal consistency. For the attitudes, PCA showed a seven-factor structure with 33 items explaining 58.41% of total variance. Cronbach’s alpha (0.87) indicated good internal consistency. There were 13 knowledge questions that were retained after item analysis, showing good internal consistency (KR20 = 0.83). In terms of criterion validity, the questionnaire differentiated between students who received breastfeeding education compared to students who were not educated in breastfeeding. Correlations between intentions and attitudes (r = 0.49), intentions and knowledge (r = 0.29), and attitudes and knowledge (r = 0.38) confirmed concurrent validity. Conclusions: The final instrument is reliable and valid for data collection on breastfeeding. Therefore, the instrument is recommended for evaluation of breastfeeding education programs aimed at upper-grade elementary and secondary school students. PMID:29702616
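Both versions of the abstract above report internal consistency via Cronbach's alpha (and KR-20 for the dichotomous knowledge items). The standard formula is simple enough to show directly; the sketch below is generic, with an invented score matrix rather than the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.
    KR-20 is the same formula applied to dichotomous (0/1) items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# e.g. 5 respondents x 4 Likert items (illustrative data, not the questionnaire's)
scores = np.array([[4, 5, 4, 3], [2, 3, 2, 2], [5, 5, 4, 5], [3, 3, 3, 4], [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))
```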
Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji
2017-01-01
The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617
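The range analysis mentioned in the abstract above ranks factors by the spread of the level means of the response. A generic sketch, with invented factor levels and shape-recovery values rather than the printed-PLA measurements, is given below.

```python
import numpy as np

# Range analysis for one factor of an orthogonal (Taguchi-style) experiment:
# average the response at each factor level and take the spread of those means;
# a larger range indicates a stronger effect on the target index.
levels = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])           # levels of one factor across 9 runs
recovery_ratio = np.array([91, 92, 90, 95, 96, 94, 98, 97, 99.0])

level_means = [recovery_ratio[levels == lv].mean() for lv in np.unique(levels)]
print("level means:", np.round(level_means, 2),
      " range:", round(max(level_means) - min(level_means), 2))
```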
Genetic parameters of Legendre polynomials for first-parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated with Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records collected by regular milk recording were available for cows registered between 1990 and 1996. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters, based on the eigenvalues and eigenvectors.
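A random regression test-day model of the kind described above needs Legendre polynomial covariables evaluated at each day in milk. The sketch below builds such a design matrix; the DIM range of 5 to 305 d and the unnormalized polynomials are common conventions assumed here, not details taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(dim, order, dim_min=5, dim_max=305):
    """Legendre polynomial covariables of degree 0..order evaluated at days in milk (DIM).
    DIM is rescaled to [-1, 1]; many test-day applications additionally normalize each
    polynomial by sqrt((2n+1)/2), which only rescales the fitted coefficients."""
    x = -1.0 + 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min)
    # one column per polynomial degree
    return np.column_stack([legendre.legval(x, np.eye(order + 1)[d]) for d in range(order + 1)])

# Third-order design matrix at a few test days, as recommended for the genetic part
Z = legendre_covariables(dim=[5, 65, 155, 245, 305], order=3)
print(Z.round(3))
```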
Messner, Steven F.; Raffalovich, Lawrence E.; Sutton, Gretchen M.
2011-01-01
This paper assesses the extent to which the infant mortality rate might be treated as a “proxy” for poverty in research on cross-national variation in homicide rates. We have assembled a pooled, cross-sectional time-series dataset for 16 advanced nations over the 1993–2000 period that includes standard measures of infant mortality and homicide and also contains information on two commonly used “income-based” poverty measures: a measure intended to reflect “absolute” deprivation and a measure intended to reflect “relative” deprivation. With these data, we are able to assess the criterion validity of the infant mortality rate with reference to the two income-based poverty measures. We are also able to estimate the effects of the various indicators of disadvantage on homicide rates in regression models, thereby assessing construct validity. The results reveal that the infant mortality rate is more strongly correlated with “relative poverty” than with “absolute poverty,” although much unexplained variance remains. In the regression models, the measure of infant mortality and the relative poverty measure yield significant positive effects on homicide rates, while the absolute poverty measure does not exhibit any significant effects. Our analyses suggest that it would be premature to dismiss relative deprivation in cross-national research on homicide, and that disadvantage is best conceptualized and measured as a multidimensional construct. PMID:21643432
Lindström, Martin; Axén, Elin; Lindström, Christine; Beckman, Anders; Moghaddassi, Mahnaz; Merlo, Juan
2006-12-01
The aim of this study was to investigate the influence of contextual (social capital and administrative/neo-materialist) and individual factors on lack of access to a regular doctor. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, which is 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was performed. The effect (intra-class correlations, cross-level modification and odds ratios) of individual and municipality (social capital and health care district) factors on lack of access to a regular doctor was analysed using simulation method. The Deviance Information Criterion (DIC) was used as information criterion for the models. The second level municipality variance in lack of access to a regular doctor is substantial even in the final models with all individual and contextual variables included. The model that results in the largest reduction in DIC is the model including age, sex and individual social participation (which is a network aspect of social capital), but the models which include administrative and social capital second level factors also reduced the DIC values. This study suggests that both administrative health care district and social capital may partly explain the individual's self reported lack of access to a regular doctor.
Optimal consistency in microRNA expression analysis using reference-gene-based normalization.
Wang, Xi; Gardiner, Erin J; Cairns, Murray J
2015-05-01
Normalization of high-throughput molecular expression profiles secures differential expression analysis between samples of different phenotypes or biological conditions, and facilitates comparison between experimental batches. While the same general principles apply to microRNA (miRNA) normalization, there is mounting evidence that global shifts in their expression patterns occur in specific circumstances, which pose a challenge for normalizing miRNA expression data. As an alternative to global normalization, which has the propensity to flatten large trends, normalization against constitutively expressed reference genes presents an advantage through their relative independence. Here we investigated the performance of reference-gene-based (RGB) normalization for differential miRNA expression analysis of microarray expression data, and compared the results with other normalization methods, including: quantile, variance stabilization, robust spline, simple scaling, rank invariant, and Loess regression. The comparative analyses were executed using miRNA expression in tissue samples derived from subjects with schizophrenia and non-psychiatric controls. We proposed a consistency criterion for evaluating methods by examining the overlapping of differentially expressed miRNAs detected using different partitions of the whole data. Based on this criterion, we found that RGB normalization generally outperformed global normalization methods. Thus we recommend the application of RGB normalization for miRNA expression data sets, and believe that this will yield a more consistent and useful readout of differentially expressed miRNAs, particularly in biological conditions characterized by large shifts in miRNA expression.
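The consistency criterion described above scores a normalization method by how much the differentially expressed miRNA lists from different data partitions overlap. A simplified, Jaccard-style version of that idea is sketched below; the miRNA identifiers are hypothetical.

```python
def consistency(de_list_a, de_list_b):
    """Overlap between differentially expressed miRNA lists detected on two data
    partitions, as a fraction of their union (a Jaccard-style simplification of the
    consistency criterion described above)."""
    a, b = set(de_list_a), set(de_list_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical miRNA identifiers, for illustration only
print(consistency({"miR-21", "miR-132", "miR-7", "miR-137"},
                  {"miR-21", "miR-132", "miR-219", "miR-137"}))   # 0.6
```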
NASA Astrophysics Data System (ADS)
Caballero, Rafael; Gil, Ángel; Fernández-Santos, Xavier
2008-08-01
European Large Scale Grazing Systems (LSGS) are at a crossroads, with environmental, agronomic, and social factors interacting to determine their future viability. This research assesses the current environmental and socio-economic status of a wide range of European LSGS according to an agreed subset of sustainability criteria and indicators, which have been recognized by corresponding experts and privileged observers for their respective case-study systems. A survey questionnaire was drafted containing five main criteria (pastoral use, environmental, economic, social, and market and development), with four conceptual scored variables (indicators) within each criterion. Descriptive, analytical and clustering statistical techniques helped to draw a synthesis of the main results and to standardize sustainability variables across different biogeographical regions and management situations. The results show large multicollinearity among the 20 variables proposed. This dependence was revealed by the reduction to six main factor-components, which accounted for about 73% of the total variance in responses. Aggregation of point-score indicators across criteria to obtain a sustainability index can be of less policy relevance than responses to specific criteria or indicators. Affinity between case-study systems, as judged by collaborative expert responses, was not related to biogeographical location, operating livestock sector, or population density in their areas. The results show larger weaknesses and constraints in the economic and social criteria than in the pastoral and environmental criteria, with the largest heterogeneity of responses appearing in the social criterion.
Saffari, Mohsen; Naderi, Maryam K; Piper, Crystal N; Koenig, Harold G
There is no valid and well-established tool to measure fatigue in people with chronic hepatitis B. The aim of this study was to translate the Multidimensional Fatigue Inventory (MFI) into Persian and examine its reliability and validity in Iranian people with chronic hepatitis B. The demographic questionnaire and MFI, as well as Chronic Liver Disease Questionnaire and EuroQol-5D (to assess criterion validity), were administered in face-to-face interviews with 297 participants. A forward-backward translation method was used to develop a culturally adapted Persian version of the questionnaire. Cronbach's α was used to assess the internal reliability of the scale. Pearson correlation was used to assess criterion validity, and known-group method was used along with factor analysis to establish construct validity. Cronbach's α for the total scale was 0.89. Convergent and discriminant validities were also established. Correlations between the MFI and the health-related quality of life scales were significant (p < .01). The scale differentiated between subgroups of persons with the hepatitis B infection in terms of age, gender, employment, education, disease duration, and stage of disease. Factor analysis indicated a four-factor solution for the scale that explained 60% of the variance. The MFI is a valid and reliable instrument to identify fatigue in Iranians with hepatitis B.
Geller, Josie; Brown, Krista E; Srikameswaran, Suja; Piper, William; Dunn, Erin C
2013-09-01
Readiness for change, as assessed by the readiness and motivation interview (RMI), predicts a number of clinical outcome variables in eating disorders including enrollment in intensive treatment, symptom change, dropout, and relapse. Although clinically useful, the training and administration of the RMI is time consuming. The purpose of this research was to (a) develop a self-report, symptom-specific version of the RMI, the readiness and motivation questionnaire (RMQ), that can be used to assess readiness for change across all eating disorder diagnoses and (b) establish its psychometric properties. The RMQ provides stage of change, internality, and confidence scores for each of 4 eating disorder symptom domains (restriction, bingeing, and cognitive and compensatory behaviors). Individuals (N = 244) with current eating disorder diagnoses completed the RMQ and measures of convergent, discriminant, and criterion validity. Similar to the RMI scores, readiness scores on the RMQ differed according to symptom domain. Regarding criterion validity, RMQ scores were significantly associated with ratings of anticipated difficulty of recovery activities and completion of recovery activities. The RMQ contributed significant unique variance to anticipated difficulty of recovery activities, beyond those accounted for by the RMI and a questionnaire measure of global readiness. The RMQ is thus an acceptable alternative to the RMI, providing global and domain-specific readiness information when time or cost prohibits use of an interview.
Development and Validation of the Five-by-Five Resilience Scale.
DeSimone, Justin A; Harms, P D; Vanhove, Adam J; Herian, Mitchel N
2017-09-01
This article introduces a new measure of resilience and five related protective factors. The Five-by-Five Resilience Scale (5×5RS) is developed on the basis of theoretical and empirical considerations. Two samples ( N = 475 and N = 613) are used to assess the factor structure, reliability, convergent validity, and criterion-related validity of the 5×5RS. Confirmatory factor analysis supports a bifactor model. The 5×5RS demonstrates adequate internal consistency as evidenced by Cronbach's alpha and empirical reliability estimates. The 5×5RS correlates positively with the Connor-Davidson Resilience Scale (CD-RISC), a commonly used measure of resilience. The 5×5RS exhibits similar criterion-related validity to the CD-RISC as evidenced by positive correlations with satisfaction with life, meaning in life, and secure attachment style as well as negative correlations with rumination and anxious or avoidant attachment styles. 5×5RS scores are positively correlated with healthy behaviors such as exercise and negatively correlated with sleep difficulty and symptomology of anxiety and depression. The 5×5RS incrementally explains variance in some criteria above and beyond the CD-RISC. Item responses are modeled using the graded response model. Information estimates demonstrate the ability of the 5×5RS to assess individuals within at least one standard deviation of the mean on relevant latent traits.
Differentiating corporal punishment from physical abuse in the prediction of lifetime aggression.
King, Alan R; Ratzak, Abrianna; Ballantyne, Sage; Knutson, Shane; Russell, Tiffany D; Pogalz, Colton R; Breen, Cody M
2018-05-01
Corporal punishment and parental physical abuse often co-occur during upbringing, making it difficult to differentiate their selective impacts on psychological functioning. Associations between corporal punishment and a number of lifetime aggression indicators were examined in this study after efforts to control the potential influence of various forms of co-occurring maltreatment (parental physical abuse, childhood sexual abuse, sibling abuse, peer bullying, and observed parental violence). College students (N = 1,136) provided retrospective self-reports regarding their history of aggression and levels of exposure to childhood corporal punishment and maltreatment experiences. Analyses focused on three hypotheses: 1) The odds of experiencing childhood physical abuse would be higher among respondents reporting frequent corporal punishment during upbringing; 2) Corporal punishment scores would predict the criterion aggression indices after control of variance associated with childhood maltreatment; 3) Aggression scores would be higher among respondents classified in the moderate and elevated corporal punishment risk groups. Strong support was found for the first hypothesis since the odds of childhood physical abuse recollections were higher (OR = 65.3) among respondents who experienced frequent (>60 total disciplinary acts) corporal punishment during upbringing. Partial support was found for the second and third hypotheses. Dimensional and categorical corporal punishment scores were associated significantly with half of the criterion measures. These findings support efforts to dissuade reliance on corporal punishment to manage child behavior. © 2018 Wiley Periodicals, Inc.
Almoqbel, Fahad M; Irving, Elizabeth L; Leat, Susan J
2017-08-01
The purpose of this study was to investigate the development of visual acuity (VA) and contrast sensitivity in children as measured with objective (sweep visually evoked potential) and subjective, psychophysical techniques, including signal detection theory (SDT), which attempts to control for differences in criterion or behavior between adults and children. Furthermore, this study examines the possibility of applying SDT methods with children. Visual acuity and contrast thresholds were measured in 12 children 6 to 7 years old, 10 children 8 to 9 years old, 10 children 10 to 12 years old, and 16 adults. For sweep visually evoked potential measurements, spatial frequency was swept from 1 to 40 cpd to measure VA, and contrast of sine-wave gratings (1 or 8 cpd) was swept from 0.33 to 30% to measure contrast thresholds. For psychophysical measurements, VA and contrast thresholds (1 or 8 cpd) were measured using a temporal two-alternative forced-choice staircase procedure and also with a yes-no SDT procedure. Optotype (logMAR [log of the minimum angle of resolution]) VA was also measured. The results of the various procedures were in agreement showing that there are age-related changes in threshold values and logMAR VA after the age of 6 years and that these visual functions do not become adult-like until the age of 8 to 9 years at the earliest. It was also found that children can participate in SDT procedures and do show differences in criterion compared with adults in psychophysical testing. These findings confirm a slightly later development of VA and contrast sensitivity (8 years or older) and indicate the importance of using SDT or forced-choice procedures in any developmental study to attempt to overcome the effect of criterion in children.
Urrestarazu, Jorge; Royo, José B.; Santesteban, Luis G.; Miranda, Carlos
2015-01-01
Fingerprinting information can be used to elucidate in a robust manner the genetic structure of germplasm collections, allowing a more rational and finer assessment of genetic resources. Bayesian model-based approaches are nowadays generally preferred for inferring genetic structure, but it is still largely unresolved how marker sets should be built in order to obtain a robust inference. The objective was to evaluate, in Pyrus germplasm collections, the influence of the SSR marker set size on the inferred genetic structure, also evaluating the influence of the criterion used to select those markers. Inferences were performed considering an increasing number of SSR markers that ranged from just two up to 25, incorporated one at a time into the analysis. The influence of the number of SSR markers used was evaluated by comparing the number of populations and the strength of the signal detected, and also the similarity of the genotype assignments to populations between analyses. In order to test whether those results were influenced by the criterion used to select the SSRs, several choosing scenarios based on the discrimination power or the fixation index values of the SSRs were tested. Our results indicate that population structure could be inferred accurately once a certain threshold number of SSRs was reached, which depended on the underlying structure within the genotypes, whereas the method used to select the markers included in each set appeared not to be very relevant. The minimum number of SSRs required to provide robust structure inferences and adequate measurements of the differentiation, even when low differentiation levels exist within populations, proved to be similar to that of the complete list of recommended markers for fingerprinting. When an SSR set size similar to the minimum marker set recommended for fingerprinting is used, only major divisions or moderate (FST > 0.05) differentiation of the germplasm are detected. PMID:26382618
Distribution of nitrogen fixation and nitrogenase-like sequences amongst microbial genomes
2012-01-01
Background The metabolic capacity for nitrogen fixation is known to be present in several prokaryotic species scattered across taxonomic groups. Experimental detection of nitrogen fixation in microbes requires species-specific conditions, making it difficult to obtain a comprehensive census of this trait. The recent and rapid increase in the availability of microbial genome sequences affords novel opportunities to re-examine the occurrence and distribution of nitrogen fixation genes. The current practice for computational prediction of nitrogen fixation is to use the presence of the nifH and/or nifD genes. Results Based on a careful comparison of the repertoire of nitrogen fixation genes in known diazotroph species we propose a new criterion for computational prediction of nitrogen fixation: the presence of a minimum set of six genes coding for structural and biosynthetic components, namely NifHDK and NifENB. Using this criterion, we conducted a comprehensive search in fully sequenced genomes and identified 149 diazotrophic species, including 82 known diazotrophs and 67 species not known to fix nitrogen. The taxonomic distribution of nitrogen fixation in Archaea was limited to the Euryarchaeota phylum; within the Bacteria domain we predict that nitrogen fixation occurs in 13 different phyla. Of these, seven phyla had not hitherto been known to contain species capable of nitrogen fixation. Our analyses also identified protein sequences that are similar to nitrogenase in organisms that do not meet the minimum-gene-set criteria. The existence of nitrogenase-like proteins lacking conserved co-factor ligands in both diazotrophs and non-diazotrophs suggests their potential for performing other, as yet unidentified, metabolic functions. Conclusions Our predictions expand the known phylogenetic diversity of nitrogen fixation, and suggest that this trait may be much more common in nature than it is currently thought. The diverse phylogenetic distribution of nitrogenase-like proteins indicates potential new roles for anciently duplicated and divergent members of this group of enzymes. PMID:22554235
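The minimum-gene-set criterion proposed above reduces to a simple set test on a genome's annotated genes. A minimal sketch, assuming gene annotations are available as a collection of names:

```python
# Minimal sketch of the minimum-gene-set criterion described above: an annotated
# genome is predicted diazotrophic only if it carries all six structural and
# biosynthetic genes (NifHDK and NifENB). Gene names come from the abstract;
# the input format is our assumption.
MINIMUM_NIF_SET = {"nifH", "nifD", "nifK", "nifE", "nifN", "nifB"}

def predicted_diazotroph(annotated_genes):
    """Return True if the genome annotation contains the full NifHDK + NifENB set."""
    return MINIMUM_NIF_SET.issubset(set(annotated_genes))

print(predicted_diazotroph({"nifH", "nifD", "nifK", "nifE", "nifN", "nifB", "glnA"}))  # True
print(predicted_diazotroph({"nifH", "nifD"}))  # False: nifH/nifD alone are not enough
```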
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous works have found that quantitative analysis of ¹²³I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify the innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with ¹²³I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth so that the estimated input function approximates its real value and thus the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new protocol of 30 min was built with a shorter acquisition time that maintained a 5% error in estimating rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with ¹²³I-MIBG. Compared to a uniform interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform interval achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for ¹²³I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)
NASA Astrophysics Data System (ADS)
Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.
2013-11-01
We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
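The mean-squared-error comparison described above is easy to reproduce in outline: under a Gamma(alpha, beta) prior (rate parameterization) the posterior for the Poisson mean is Gamma(alpha + sum x, beta + n), whose mean is the Bayes estimator. The sketch below runs a small Monte Carlo comparison against the sample mean (the minimum variance unbiased and ML estimator); the prior settings and sample sizes are illustrative, not those of the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_mse(lam=2.0, n=10, alpha=1.0, beta=1.0, reps=20_000):
    """Monte Carlo mean-squared-error comparison of the Bayes estimator
    (posterior mean under a Gamma(alpha, beta) prior, rate parameterization)
    with the minimum variance unbiased / ML estimator (the sample mean).
    A sketch in the spirit of the abstract; the prior values are illustrative."""
    x = rng.poisson(lam, size=(reps, n))
    mle = x.mean(axis=1)                               # MVU and ML estimator of lambda
    bayes = (alpha + x.sum(axis=1)) / (beta + n)       # mean of Gamma(alpha + sum x, beta + n)
    return ((mle - lam) ** 2).mean(), ((bayes - lam) ** 2).mean()

mse_mle, mse_bayes = mc_mse()
print(f"MSE sample mean: {mse_mle:.4f}   MSE Bayes posterior mean: {mse_bayes:.4f}")
```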
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
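The bias-variance bookkeeping described above (repeated reconstructions with independent noise realizations) can be summarized by a short helper; this is a generic sketch, not the authors' reconstruction code.

```python
import numpy as np

def image_error_decomposition(reconstructions, truth):
    """Split mean-squared error over the image into bias^2 and variance, estimated
    from repeated reconstructions of the same phantom with independent noise.
    `reconstructions` is (n_repeats, n_pixels); `truth` is (n_pixels,)."""
    recon = np.asarray(reconstructions, dtype=float)
    mean_image = recon.mean(axis=0)
    bias_sq = ((mean_image - np.asarray(truth)) ** 2).mean()
    variance = recon.var(axis=0).mean()
    return bias_sq, variance, bias_sq + variance

# Toy example: 100 noisy "reconstructions" of a 4-pixel image
rng = np.random.default_rng(0)
truth = np.array([1.0, 2.0, 2.0, 1.0])
recons = truth * 0.9 + rng.normal(0.0, 0.05, size=(100, 4))   # slightly biased, noisy
print([round(v, 4) for v in image_error_decomposition(recons, truth)])
```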
Sleep Reactivity and Insomnia: Genetic and Environmental Influences
Drake, Christopher L.; Friedman, Naomi P.; Wright, Kenneth P.; Roth, Thomas
2011-01-01
Study Objectives: Determine the genetic and environmental contributions to sleep reactivity and insomnia. Design: Population-based twin cohort. Participants: 1782 individual twins (988 monozygotic or MZ; 1,086 dizygotic or DZ), including 744 complete twin pairs (377 MZ and 367 DZ). Mean age was 22.5 ± 2.8 years; gender distribution was 59% women. Measurements: Sleep reactivity was measured using the Ford Insomnia Response to Stress Test (FIRST). The criterion for insomnia was having difficulty falling asleep, staying asleep, or nonrefreshing sleep “usually or always” for ≥ 1 month, with at least “somewhat” interference with daily functioning. Results: The prevalence of insomnia was 21%. Heritability estimates for sleep reactivity were 29% for females and 43% for males. The environmental variance for sleep reactivity was greater for females and entirely due to nonshared effects. Insomnia was 43% to 55% heritable for males and females, respectively; the sex difference was not significant. The genetic variances in insomnia and FIRST scores were correlated (r = 0.54 in females, r = 0.64 in males), as were the environmental variances (r = 0.32 in females, r = 0.37 in males). In terms of individual insomnia symptoms, difficulty staying asleep (25% to 35%) and nonrefreshing sleep (34% to 35%) showed relatively more genetic influences than difficulty falling asleep (0%). Conclusions: Sleep reactivity to stress has a substantial genetic component, as well as an environmental component. The finding that FIRST scores and insomnia symptoms share genetic influences is consistent with the hypothesis that sleep reactivity may be a genetic vulnerability for developing insomnia. Citation: Drake CL; Friedman NP; Wright KP; Roth T. Sleep reactivity and insomnia: genetic and environmental influences. SLEEP 2011;34(9):1179-1188. PMID:21886355
The validity of physical aggression in predicting adolescent academic performance.
Loveland, James M; Lounsbury, John W; Welsh, Deborah; Buboltz, Walter C
2007-03-01
Aggression has a long history in academic research as both a criterion and a predictor variable, and it is well documented that aggression is related to a variety of poor academic outcomes such as lowered academic performance, absenteeism and lower graduation rates. However, recent research has implicated physical aggression as being predictive of lower academic performance. The purpose of this study was to examine the role of the 'Big Five' personality traits of agreeableness, openness to experience, conscientiousness, neuroticism and extraversion, together with physical aggression, in predicting the grade point averages (GPA) of adolescent students, and to investigate whether or not there were differences in these relationships between male and female students. The sample comprised 992 students in grades 9 to 12 from a high school in the south-eastern USA, surveyed as part of a larger study examining the students' preparation for entry into the workforce. The study was correlational in nature: students completed a personality inventory developed by the second author, with GPA information supplied by the school. Results indicated that physical aggression accounts for 16% of the variance in GPA and adds 7% to the prediction of GPA beyond the Big Five. The Big Five traits added only 1.5% to the prediction of GPA after controlling for physical aggression. Interestingly, a significantly larger amount of variance in GPA was predicted by physical aggression for females than for males. Aggression accounts for significantly more variance in the GPA of females than of males, even when controlling for the Big Five personality factors. Future research should examine the differences in the expression of aggression in males and females, as well as how this affects interactions between peers and between students and their teachers.
Thompson, Angus H; Waye, Arianna
2018-06-01
Presenteeism (reduced productivity at work) is thought to be responsible for large economic costs. Nevertheless, much of the research supporting this is based on self-report questionnaires that have not been adequately evaluated. To examine the level of agreement among leading tests of presenteeism and to determine the inter-relationship of the two productivity subcategories, amount and quality, within the context of construct validity and method variance. Just under 500 health care workers from an urban health area were asked to complete a questionnaire containing the productivity items from eight presenteeism instruments. The analysis included an examination of test intercorrelations, separately for amount and quality, supplemented by principal-component analyses to determine whether either construct could be described by a single factor. A multitest, multiconstruct analysis was performed on the four tests that assessed both amount and quality to test for the relative contributions of construct and method variance. A total of 137 questionnaires were completed. Agreement among tests was positive, but modest. Pearson r ranges were 0 to 0.64 (mean = 0.32) for Amount and 0.03 to 0.38 (mean = 0.25) for Quality. Further analysis suggested that agreement was influenced more by method variance than by the productivity constructs the tests were designed to measure. The results suggest that presenteeism tests do not accurately assess work performance. Given their importance in the determination of policy-relevant conclusions, attention needs to be given to test improvement in the context of criterion validity assessment. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
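A reduced sketch of the regression approach described above: a multiplicative (power-law) model fitted by ordinary least squares in log space. Only two predictors are used here for brevity, and all numbers are invented; the actual models also include slope, elevation and temperature.

```python
import numpy as np

# Hypothetical predictors: catchment area (km^2) and mean annual precipitation (mm).
area = np.array([120.0, 2_300.0, 15_000.0, 480.0, 95_000.0])
precip = np.array([650.0, 900.0, 1_200.0, 400.0, 1_500.0])
mean_af = np.array([0.8, 25.0, 310.0, 1.9, 2_600.0])   # mean annual flow (m^3/s), illustrative

# Multiplicative model fitted by ordinary least squares in log space:
# log(AF) = b0 + b1*log(area) + b2*log(precip)
X = np.column_stack([np.ones_like(area), np.log(area), np.log(precip)])
coef, *_ = np.linalg.lstsq(X, np.log(mean_af), rcond=None)
resid = np.log(mean_af) - X @ coef
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(mean_af) - np.log(mean_af).mean()) ** 2)
print(coef.round(2), f"explained variance in log space: {r2:.2f}")
```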
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as the objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed-form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
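The E-criterion itself is just the smallest eigenvalue of the Fisher information matrix, which is what makes the objective nonsmooth when eigenvalues cross. A minimal sketch (toy 2x2 matrices, not the predictive-microbiology case study):

```python
import numpy as np

def e_criterion(fim):
    """E-optimality objective: the smallest eigenvalue of the (symmetric) Fisher
    information matrix. Larger is better; the function is nonsmooth where the
    smallest eigenvalue is repeated, which is what motivates the reformulation."""
    return np.linalg.eigvalsh(fim).min()

fim_a = np.array([[4.0, 1.0], [1.0, 3.0]])
fim_b = np.array([[6.0, 0.0], [0.0, 0.5]])
# Design A wins on the E-criterion even though design B has the largest single eigenvalue.
print(e_criterion(fim_a), e_criterion(fim_b))
```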
Geometric Structure of 3D Spinal Curves: Plane Regions and Connecting Zones
Berthonnaud, E.; Hilmi, R.; Dimnet, J.
2012-01-01
This paper presents a new study of the geometric structure of 3D spinal curves. The spine is considered as a heterogeneous beam, composed of vertebrae and intervertebral discs. The spine is modeled as a deformable wire along which vertebrae are beads rotating about the wire. 3D spinal curves are composed of plane regions connected together by zones of transition. The 3D spinal curve is uniquely flexed along the plane regions. The angular offsets between adjacent regions are concentrated at the level of the middle zones of transition, illustrating the heterogeneity of the spinal geometric structure. The plane regions along the 3D spinal curve must satisfy two criteria: (i) a criterion of minimum distance between the curve and the regional plane and (ii) a criterion controlling that the curve is continuously plane at the level of the region. The geometric structure of each 3D spinal curve is characterized by the sizes and orientations of the regional planes, by the parameters representing the flexed regions and by the sizes and functions of the zones of transition. Spinal curves of asymptomatic subjects show three plane regions corresponding to the spinal curvatures: lumbar, thoracic and cervical. In some scoliotic spines, four plane regions may be detected. PMID:25031873
Selection criteria utilized for hyperbaric oxygen treatment of carbon monoxide poisoning.
Hampson, N B; Dunford, R G; Kramer, C C; Norkool, D M
1995-01-01
Medical directors of North American hyperbaric oxygen (HBO) facilities were surveyed to assess selection criteria applied for treatment of acute carbon monoxide (CO) poisoning within the hyperbaric medicine community. Responses were received from 85% of the 208 facilities in the United States and Canada. Among responders, 89 monoplace and 58 multiplace chamber facilities treat acute CO poisoning, managing a total of 2,636 patients in 1992. A significant majority of facilities treat CO-exposed patients with coma (98%), transient loss of consciousness (LOC) (77%), ischemic changes on electrocardiogram (91%), focal neurologic deficits (94%), or abnormal psychometric testing (91%), regardless of carboxyhemoglobin (COHb) level. Although 92% would use HBO for a patient presenting with headache, nausea, and COHb 40%, only 62% of facilities utilize a specified minimum COHb level as the sole criterion for HBO therapy of an asymptomatic patient. When COHb is used as an independent criterion to determine HBO treatment, the level utilized varies widely between institutions. Half of responding facilities place limits on the delay to treatment for patients with only transient LOC. Time limits are applied less often in cases with persistent neurologic deficits. While variability exists, majority opinions can be derived for many patient selection criteria regarding the use of HBO in acute CO poisoning.
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.
Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-based statistical system, and their data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.
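The branching logic described above (normality test, then ARIMA-by-AIC or a neural autoregression) can be sketched as follows. The original system is R-based; this Python version uses scipy's Shapiro-Wilk test and statsmodels' ARIMA as stand-ins, leaves the AR-NN branch as a placeholder, and treats the significance level and candidate order grid as assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

def select_forecaster(buffer_values, alpha=0.05, max_p=3, max_q=2):
    """If a buffer of utilization samples fails a normality test, hand it to the
    neural-network branch; otherwise choose the ARIMA order with the lowest AIC."""
    if stats.shapiro(buffer_values).pvalue < alpha:
        return "AR-NN"                                   # non-Gaussian buffer -> neural model
    best_order, best_aic = None, np.inf
    for p in range(1, max_p + 1):
        for q in range(0, max_q + 1):
            aic = ARIMA(buffer_values, order=(p, 0, q)).fit().aic
            if aic < best_aic:
                best_order, best_aic = (p, 0, q), aic
    return f"ARIMA{best_order} (AIC={best_aic:.1f})"

# Simulated CPU-utilization buffer (percent), roughly Gaussian around 50%
rng = np.random.default_rng(1)
cpu = 50 + rng.normal(0, 3, 120)
print(select_forecaster(cpu))
```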
The absolute threshold of cone vision
Koenig, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud
Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-based statistical system, and their data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers. PMID:28811819
Kim, Sung-Su; Choi, Hyun-Jeung; Kim, Jin Ju; Kim, M Sun; Lee, In-Seon; Byun, Bohyun; Jia, Lina; Oh, Myung Ryurl; Moon, Youngho; Park, Sarah; Choi, Joon-Seok; Chae, Seoung Wan; Nam, Byung-Ho; Kim, Jin-Soo; Kim, Jihun; Min, Byung Soh; Lee, Jae Seok; Won, Jae-Kyung; Cho, Soo Youn; Choi, Yoon-La; Shin, Young Kee
2018-01-11
In clinical translational research and molecular in vitro diagnostics, a major challenge in the detection of genetic mutations is overcoming artefactual results caused by the low-quality of formalin-fixed paraffin-embedded tissue (FFPET)-derived DNA (FFPET-DNA). Here, we propose the use of an 'internal quality control (iQC) index' as a criterion for judging the minimum quality of DNA for PCR-based analyses. In a pre-clinical study comparing the results from droplet digital PCR-based EGFR mutation test (ddEGFR test) and qPCR-based EGFR mutation test (cobas EGFR test), iQC index ≥ 0.5 (iQC copies ≥ 500, using 3.3 ng of FFPET-DNA [1,000 genome equivalents]) was established, indicating that more than half of the input DNA was amplifiable. Using this criterion, we conducted a retrospective comparative clinical study of the ddEGFR and cobas EGFR tests for the detection of EGFR mutations in non-small cell lung cancer (NSCLC) FFPET-DNA samples. Compared with the cobas EGFR test, the ddEGFR test exhibited superior analytical performance and equivalent or higher clinical performance. Furthermore, iQC index is a reliable indicator of the quality of FFPET-DNA and could be used to prevent incorrect diagnoses arising from low-quality samples.
Effect of viscoplasticity on ignition sensitivity of an HMX based PBX
NASA Astrophysics Data System (ADS)
Hardin, D. Barrett; Zhou, Min
2017-01-01
The effect of viscoplastic deformation of the energetic component (HMX) on the mechanical, thermal, and ignition responses of a two-phase (HMX and Estane) PBX is analyzed. PBX microstructures are subjected to impact loading from a constant velocity piston traveling at a rate of 50 to 200 m/s. The analysis uses a 2D cohesive finite element framework, the focus of which is to evaluate the relative ignition sensitivity of the materials to determine the effect of the viscoplasticity of HMX on the responses. To delineate this effect, two sets of calculations are carried out; one set assumes the HMX grains are fully hyperelastic, and the other set assumes the HMX grains are elastic-viscoplastic. Results show that PBX specimens with elastic-viscoplastic HMX grains experience lower average and peak temperature rises, and as a result, show lower numbers of hotspots. An ignition criterion based on a criticality threshold obtained from chemical kinetics is used to quantify the ignition behavior of the materials. The criterion focuses on hotspot size and temperature to determine if a hotspot will undergo thermal runaway. It is found that the viscoplasticity of HMX increases the minimum load duration, mean load duration, threshold loading velocity, and total input energy required for ignition.
NASA Astrophysics Data System (ADS)
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the construction of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP) using the Correlation Decay Distance (CDD) function, with the aim of evaluating, at the sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at the monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the annual cycle from dominating the annual CDD estimate. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to equation (1): Log(r²ij) = b·dij (1), where Log(r²ij) is the common variance between the target series (i) and a neighbouring series (j), dij is the distance between them, and b is the slope of the ordinary least-squares linear regression, fitted using only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using ordinary kriging with a spherical variogram over the conterminous land of Spain and converted to a regular 10 km² grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations share a common variance in temperature (both Tmax and Tmin) above the selected threshold (50%, Pearson r ~0.70) does not, on average, exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coast-to-inland gradient at the annual, seasonal and monthly scales, with the highest spatial variability along coastal areas and lower variability inland. The highest spatial variability coincides particularly with coastal areas surrounded by mountain chains, suggesting that orography is one of the main driving factors behind the higher inter-station variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than daytime temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a higher network density would be necessary to capture the greater spatial variability found for Tmin with respect to Tmax. A conservative distance for building reference series can be estimated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
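A minimal sketch of the CDD fit in equation (1): regress the log of the common variance on inter-station distance with no intercept and read off the distance at which the fitted curve crosses the 50% threshold. The distance/correlation pairs below are invented, not the MOTEDAS station data.

```python
import numpy as np

# Fit log(r2) = b*d through the origin (at d = 0 km, r2 = 1, so log(r2) = 0 as in
# equation (1)) and report the distance at which r2 decays to the 0.5 threshold.
d_km = np.array([10.0, 25.0, 40.0, 60.0, 90.0, 130.0, 180.0, 250.0])
r2 = np.array([0.95, 0.90, 0.84, 0.76, 0.66, 0.55, 0.44, 0.31])

b = np.sum(d_km * np.log(r2)) / np.sum(d_km ** 2)     # zero-intercept least-squares slope
cdd_50 = np.log(0.5) / b                              # distance where the fit crosses r2 = 0.5
print(f"slope b = {b:.5f} per km, CDD(50%) ≈ {cdd_50:.0f} km")
```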