A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
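As an illustration of the idea described above, here is a minimal Python sketch (illustrative data, population variance for simplicity; not the paper's actual displays): each squared deviation is the area of a square, the variance is the area of the average square, and the standard deviation is that square's side length.

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)

# Each squared deviation is the area of a square with side |x - mean|.
square_areas = [(x - mean) ** 2 for x in data]

# The variance is the area of the "average square" ...
variance = sum(square_areas) / len(square_areas)

# ... and the standard deviation is the side length of that average square.
std_dev = variance ** 0.5

print(f"mean = {mean}, variance = {variance}, standard deviation = {std_dev}")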
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state level and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in evaluation of production estimates due to lack of county area variances.
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
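The abstract does not reproduce the representations themselves; one equivalent form that works well for small integer samples is the pairwise-difference identity s² = Σ_{i<j}(x_i − x_j)² / (n(n − 1)), which needs no running mean. A hedged Python sketch of that identity (the paper's own formulas and bounds may differ):

from itertools import combinations

def variance_pairwise(xs):
    # Sample variance via pairwise differences: s^2 = sum_{i<j} (x_i - x_j)^2 / (n(n - 1)).
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

def variance_classic(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

xs = [3, 7, 8]                   # n = 3, integer observations
print(variance_pairwise(xs))     # (4^2 + 5^2 + 1^2) / 6 = 7.0
print(variance_classic(xs))      # same value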
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100 s_R / ȳ), where s_R is the sample reproducibility standard deviation, the square root of a linear combination of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), i.e., s_R = sqrt(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100 σ_R / μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
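For reference, the quantities defined in the abstract can be computed directly; the one-tailed upper-limit formula itself is not given in the abstract and is not reproduced here. A small Python sketch with illustrative (not the paper's) values:

import math

s_r_sq = 0.04    # sample repeatability variance, s_r^2 (illustrative value)
s_L_sq = 0.09    # sample laboratory-to-laboratory variance, s_L^2 (illustrative value)
y_bar = 5.0      # sample mean (illustrative value)

s_R = math.sqrt(s_r_sq + s_L_sq)        # reproducibility standard deviation
rsd_R_percent = 100.0 * s_R / y_bar     # RSD_R,% = 100 * s_R / y-bar

print(f"s_R = {s_R:.3f}, RSD_R,% = {rsd_R_percent:.1f}")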
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
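The interpretation can be checked numerically: the sample variance equals twice the mean square of all pairwise half-deviations (x_i − x_j)/2, so the sample SD is the square root of that quantity. A short Python check, assuming the usual n − 1 denominator for the sample variance:

import numpy as np
from itertools import combinations

x = np.array([2.0, 5.0, 7.0, 10.0])

sd_classic = x.std(ddof=1)   # ordinary sample SD

# Twice the mean square of all pairwise half-deviations (x_i - x_j) / 2.
half_devs = [(a - b) / 2.0 for a, b in combinations(x, 2)]
sd_pairwise = np.sqrt(2.0 * np.mean(np.square(half_devs)))

print(sd_classic, sd_pairwise)   # agree up to floating-point error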
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance are confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
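A small numerical illustration of the quantities named above (Python; the "uncorrelation index" shown is one natural normalization, the determinant of the covariance divided by the product of the component variances, and is an assumption rather than the paper's exact definition):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative correlated 3-dimensional random vector (not data from the paper).
cov_true = np.array([[1.0, 0.6, 0.2],
                     [0.6, 2.0, 0.5],
                     [0.2, 0.5, 1.5]])
sample = rng.multivariate_normal(mean=np.zeros(3), cov=cov_true, size=10_000)
cov_hat = np.cov(sample, rowvar=False)

generalized_variance = np.linalg.det(cov_hat)   # Wilks' generalized variance
wilks_sd = np.sqrt(generalized_variance)        # square root of the generalized variance

# Assumed normalization: equals 1 for uncorrelated components, shrinks toward 0 as correlation grows.
uncorrelation_index = generalized_variance / np.prod(np.diag(cov_hat))

print(wilks_sd, uncorrelation_index)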
Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E
2013-04-01
Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
...replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70...
The Cost of Uncertain Life Span*
Edwards, Ryan D.
2012-01-01
A considerable amount of uncertainty surrounds the length of human life. The standard deviation in adult life span is about 15 years in the U.S., and theory and evidence suggest it is costly. I calibrate a utility-theoretic model of preferences over length of life and show that one fewer year in standard deviation is worth about half a mean life year. Differences in the standard deviation exacerbate cross-sectional differences in life expectancy between the U.S. and other industrialized countries, between rich and poor countries, and among poor countries. Accounting for the cost of life-span variance also appears to amplify recently discovered patterns of convergence in world average human well-being. This is partly for methodological reasons and partly because unconditional variance in human length of life, primarily the component due to infant mortality, has exhibited even more convergence than life expectancy. PMID:22368324
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
Procedures for estimating confidence intervals for selected method performance parameters.
McClure, F D; Lee, J K
2001-01-01
Procedures for estimating confidence intervals (CIs) for the repeatability variance (σ_r²), reproducibility variance (σ_R² = σ_L² + σ_r²), laboratory component (σ_L²), and their corresponding standard deviations σ_r, σ_R, and σ_L, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σ_r²/σ_R²) and the ratio of the laboratory component to the reproducibility variance (σ_L²/σ_R²) are also presented.
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
...test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is constant... Using the Breusch-Pagan test shown in Table 19 below, the prob > chi2 is greater than α = 0.05; therefore we fail to reject the null hypothesis... [Residue of Table 19: Breusch-Pagan test (Ho = constant variance), estimated variance and standard deviation for overrunpercentfp100 and overrunpercent100.]
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
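As a sketch of the pooling step described above (degrees-of-freedom-weighted pooling of replicate-set RSDs; the report's concentration binning and exact pooling conventions are not reproduced, and the numbers below are illustrative, not NAWQA data):

import numpy as np

# Each row is one set of field replicate concentrations (micrograms per liter).
replicate_sets = [
    np.array([0.010, 0.012, 0.011]),
    np.array([0.095, 0.102]),
    np.array([1.05, 0.98, 1.01, 1.00]),
]

num = 0.0   # sum of (n_i - 1) * RSD_i^2
den = 0     # sum of (n_i - 1)
for reps in replicate_sets:
    n = len(reps)
    rsd = reps.std(ddof=1) / reps.mean()   # relative standard deviation of one replicate set
    num += (n - 1) * rsd ** 2
    den += n - 1

pooled_rsd_percent = 100.0 * np.sqrt(num / den)
print(f"pooled RSD = {pooled_rsd_percent:.1f} percent")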
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
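The abstract does not state the novel filter statistic explicitly; one common variance-stabilizing transformation for proportion (Beta-like) data is the arcsine-square-root transform, and filtering on the SD of transformed values removes the bias toward probes with means near 0.5. A hedged Python sketch under that assumption (synthetic data; not the paper's method verbatim):

import numpy as np

rng = np.random.default_rng(1)

# Illustrative methylation matrix: rows = CpG probes, columns = samples,
# values are proportions (beta values) in (0, 1). Not real data.
beta = rng.beta(a=2.0, b=5.0, size=(1000, 40))

def sd_filter(beta_values, k):
    # Top-k probes by SD of the raw proportions (biased toward means near 0.5).
    sd = beta_values.std(axis=1, ddof=1)
    return np.argsort(sd)[::-1][:k]

def vst_filter(beta_values, k):
    # Top-k probes by SD after an arcsine-square-root variance-stabilizing transform
    # (one possible transform for Beta/proportion data; the paper's own statistic may differ).
    z = np.arcsin(np.sqrt(beta_values))
    sd = z.std(axis=1, ddof=1)
    return np.argsort(sd)[::-1][:k]

top_sd = set(sd_filter(beta, 100))
top_vst = set(vst_filter(beta, 100))
print("overlap between the two filters:", len(top_sd & top_vst))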
First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2011-01-01
Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η², f²) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection (DPSD). With an additional parameter, for the probability of false (lure) recollection the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
Does education confer a culture of healthy behavior? Smoking and drinking patterns in Danish twins.
Johnson, Wendy; Kyvik, Kirsten Ohm; Mortensen, Erik L; Skytthe, Axel; Batty, G David; Deary, Ian J
2011-01-01
More education is associated with healthier smoking and drinking behaviors. Most analyses of effects of education focus on mean levels. Few studies have compared variance in health-related behaviors at different levels of education or analyzed how education impacts underlying genetic and environmental sources of health-related behaviors. This study explored these influences. In a 2002 postal questionnaire, 21,522 members of the Danish Twin Registry, born during 1931-1982, reported smoking and drinking habits. The authors used quantitative genetic models to examine how these behaviors' genetic and environmental variances differed with level of education, adjusting for birth-year effects. As expected, more education was associated with less smoking, and average drinking levels were highest among the most educated. At 2 standard deviations above the mean educational level, variance in smoking and drinking was about one-third that among those at 2 standard deviations below, because fewer highly educated people reported high levels of smoking or drinking. Because shared environmental variance was particularly restricted, one explanation is that education created a culture that discouraged smoking and heavy drinking. Correlations between shared environmental influences on education and the health behaviors were substantial among the well-educated for smoking in both sexes and drinking in males, reinforcing this notion.
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
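A minimal Python sketch of the statistics such programs report for a weighted linear composite: the composite mean, its variance w'Sw (S = component covariance matrix), the SD, and a validity coefficient taken here as the correlation between the composite and a criterion. The original programs' exact definitions may differ, and the data below are simulated:

import numpy as np

rng = np.random.default_rng(2)

# Illustrative component scores (rows = examinees, columns = components) and a criterion.
X = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.3, 0.1],
                                          [0.0, 1.0, 0.2],
                                          [0.0, 0.0, 1.0]])
criterion = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.8, size=200)

w = np.array([2.0, 1.0, 1.0])                 # composite weights
composite = X @ w

mean = composite.mean()
variance = w @ np.cov(X, rowvar=False) @ w    # w' S w, equals composite.var(ddof=1)
sd = np.sqrt(variance)
validity = np.corrcoef(composite, criterion)[0, 1]

print(mean, variance, sd, validity)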
Verster, Joris C; Roth, Thomas
2014-01-01
The on-the-road driving test in normal traffic is used to examine the impact of drugs on driving performance. This paper compares the sensitivity of standard deviation of lateral position (SDLP) and SD speed in detecting driving impairment. A literature search was conducted to identify studies applying the on-the-road driving test, examining the effects of anxiolytics, antidepressants, antihistamines, and hypnotics. The proportion of comparisons (treatment versus placebo) where a significant impairment was detected with SDLP and SD speed was compared. About 40% of 53 relevant papers did not report data on SD speed and/or SDLP. After placebo administration, the correlation between SDLP and SD speed was significant but did not explain much variance (r = 0.253, p = 0.0001). A significant correlation was found between ΔSDLP and ΔSD speed (treatment-placebo), explaining 48% of variance. When using SDLP as outcome measure, 67 significant treatment-placebo comparisons were found. Only 17 (25.4%) were significant when SD speed was used as outcome measure. Alternatively, for five treatment-placebo comparisons, a significant difference was found for SD speed but not for SDLP. Standard deviation of lateral position is a more sensitive outcome measure to detect driving impairment than speed variability.
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance while including variability among individuals and populations are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Species response to the environment is a Gaussian function with a peak of 1.0 at the species' peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree-ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two-species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
Buma, Brian; Costanza, Jennifer K; Riitters, Kurt
2017-11-21
The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact locations are unknown. This research quantified disturbance proportion and pattern (as contagion) at multiple scales across North America. This pattern of scale-associated variability can guide selection of study and management extents, for example, to minimize variance (measured as standard deviation) between any landscapes within an ecoregion. We identified the proportion and pattern of forest disturbance (30 m grain size) across multiple landscape extents up to 180 km². We explored the variance in proportion of disturbed area and the pattern of that disturbance between landscapes (within an ecoregion) as a function of the landscape extent. In many ecoregions, variance between landscapes within an ecoregion was minimal at broad landscape extents (low standard deviation). Gap-dominated regions showed the least variance, while fire-dominated showed the largest. Intensively managed ecoregions displayed unique patterns. A majority of the ecoregions showed low variance between landscapes at some scale, indicating an appropriate extent for incorporating natural regimes and unknown future disturbances was identified. The quantification of the scales of disturbance at the ecoregion level provides guidance for individuals interested in anticipating future disturbances which will occur in unknown spatial locations. Information on the extents required to incorporate disturbance patterns into planning is crucial for that process.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
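A hedged Python sketch of the general approach: simulate a stationary Gaussian process with a chosen autocorrelation parameter (here a simple AR(1), an assumption; the original autocorrelation functions and method-of-moments fit are not reproduced), then estimate by Monte Carlo the probability of an excursion above the crossing level lasting at least a given duration.

import numpy as np

rng = np.random.default_rng(3)

def simulate_ar1(n, phi, sigma=1.0):
    # Stationary Gaussian AR(1): x_t = phi * x_{t-1} + e_t, marginal standard deviation sigma.
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma)
    innov_sd = sigma * np.sqrt(1.0 - phi ** 2)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=innov_sd)
    return x

def has_excursion(x, level, duration):
    # True if x stays above `level` for at least `duration` consecutive samples.
    run = 0
    for v in x:
        run = run + 1 if v > level else 0
        if run >= duration:
            return True
    return False

# Monte Carlo estimate of the exceedance probability (illustrative parameters).
phi, level, duration, n, trials = 0.8, 1.5, 10, 1000, 500
p_hat = np.mean([has_excursion(simulate_ar1(n, phi), level, duration)
                 for _ in range(trials)])
print(f"P(excursion above {level} lasting >= {duration} samples) ~ {p_hat:.3f}")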
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, C; Dave, J
Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized 'For Processing' images were extracted. Mean pixel value (MPV), standard deviation (SD) and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma and the coefficients of the quadratic fit were used to derive structured, quantum and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fitting functions used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27 and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A value of 0.50 for this power parameter indicates quantum noise to be the dominant noise source, whereas deviations from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
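A short Python sketch of the variance decomposition described above: fit pixel variance as a quadratic in input detector air kerma K, variance ~ s*K² + q*K + e, and read the structured (s), quantum (q) and electronic (e) noise coefficients off the fit. The values below are synthetic, not the study's measurements.

import numpy as np

# Illustrative flat-field data: input detector air kerma (µGy) and pixel variance.
kerma = np.array([2, 5, 10, 20, 40, 60, 80, 110], dtype=float)
variance = (0.4 * kerma**2 + 4.0 * kerma + 3.0
            + np.random.default_rng(4).normal(scale=5.0, size=kerma.size))

# Least-squares quadratic fit: variance = s*K^2 + q*K + e
s, q, e = np.polyfit(kerma, variance, deg=2)
print(f"structured = {s:.2f}, quantum = {q:.2f}, electronic = {e:.2f}")

# Relative noise (SD / mean signal) falls roughly as K^-0.5 when quantum noise dominates;
# fitting that exponent is the complementary check mentioned in the abstract.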
Hinton-Bayre, Anton D
2011-02-01
There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
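For orientation, a minimal Python sketch of one classic RC formulation: a Jacobson-Truax style index with the standard error of the difference built from the baseline SD and test-retest reliability, optionally subtracting a mean practice effect. This is a reference point rather than any of the specific models compared in the paper, and the numbers are illustrative.

import math

def reliable_change_jt(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
    # x1, x2          : baseline and retest scores for one individual
    # sd_baseline     : standard deviation of baseline scores in the norm group
    # r_xx            : test-retest reliability
    # practice_effect : mean retest gain to subtract (0 for the unadjusted index)
    sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
    s_diff = math.sqrt(2.0 * sem ** 2)          # SE of the difference score
    return ((x2 - x1) - practice_effect) / s_diff

rc = reliable_change_jt(x1=50, x2=58, sd_baseline=10, r_xx=0.80, practice_effect=3)
print(rc, abs(rc) > 1.96)   # |RC| > 1.96 flags change beyond measurement error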
Odor measurements according to EN 13725: A statistical analysis of variance components
NASA Astrophysics Data System (ADS)
Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko
2014-04-01
In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In case of non-butanol odorants, repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore suggested that the repeatability limit (r), as laid down in EN 13725, can be reduced from r ≤ 0.477 to r ≤ 0.31.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the Dual-frequency Precipitation Radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
A population-based job exposure matrix for power-frequency magnetic fields.
Bowman, Joseph D; Touchstone, Jennifer A; Yost, Michael G
2007-09-01
A population-based job exposure matrix (JEM) was developed to assess personal exposures to power-frequency magnetic fields (MF) for epidemiologic studies. The JEM compiled 2,317 MF measurements taken on or near workers by 10 studies in the United States, Sweden, New Zealand, Finland, and Italy. A database was assembled from the original data for six studies plus summary statistics grouped by occupation from four other published studies. The job descriptions were coded into the 1980 Standard Occupational Classification system (SOC) and then translated to the 1980 job categories of the U.S. Bureau of the Census (BOC). For each job category, the JEM database calculated the arithmetic mean, standard deviation, geometric mean, and geometric standard deviation of the workday-average MF magnitude from the combined data. Analysis of variance demonstrated that the combining of MF data from the different sources was justified, and that the homogeneity of MF exposures in the SOC occupations was comparable to JEMs for solvents and particulates. BOC occupation accounted for 30% of the MF variance (p < 10(-6)), and the contrast (ratio of the between-job variance to the total of within- and between-job variances) was 88%. Jobs lacking data had their exposures inferred from measurements on similar occupations. The JEM provided MF exposures for 97% of the person-months in a population-based case-control study and 95% of the jobs on death certificates in a registry study covering 22 states. Therefore, we expect this JEM to be useful in other population-based epidemiologic studies.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D
2017-12-01
In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within the specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and power intensity differential method. The performance of DSDLI was testified by both phantom and in vivo experiments. In in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove the bulk motion noise, where 2D Fourier transform was utilized to generate new images with spatial interval equal to half of the distance between two pixels in both fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
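A schematic Python sketch of the DSDLI contrast as described above: difference consecutive log-scale intensity frames acquired at the same position, then take the standard deviation of the differential values over a chosen depth range for the en face map. The paper's registration, interpolation and averaging steps are omitted, and the data below are synthetic.

import numpy as np

def dsdli_enface(bscans, depth_range):
    # bscans      : array (n_repeats, depth, width) of linear OCT intensity
    #               acquired repeatedly at the same position
    # depth_range : (start, stop) depth indices for the en face window
    log_i = np.log(np.clip(bscans, 1e-12, None))   # log-scale intensity
    diffs = np.diff(log_i, axis=0)                 # consecutive-frame differences
    z0, z1 = depth_range
    # SD of the differential log intensities over repeats and depth.
    return diffs[:, z0:z1, :].std(axis=(0, 1))

# Synthetic example: static background plus a "flow" column band with larger
# frame-to-frame intensity decorrelation.
rng = np.random.default_rng(5)
frames = rng.lognormal(mean=0.0, sigma=0.05, size=(4, 64, 128))
frames[:, :, 60:68] *= rng.lognormal(mean=0.0, sigma=0.6, size=(4, 64, 8))

profile = dsdli_enface(frames, depth_range=(10, 50))
print(profile[60:68].mean(), profile[:10].mean())   # flow region vs static region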
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2005-01-01
The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for…
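A hedged Python sketch of the robust effect size described: 20% trimmed means in the numerator and a pooled 20% Winsorized variance in the denominator. The rescaling constant used here to put the measure back on the Cohen's d metric under normality is an assumption taken from the robust-effect-size literature, not from this abstract.

import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

def robust_effect_size(x, y, trim=0.20, scale_const=0.642):
    # Trimmed means replace the group means.
    tm_x = stats.trim_mean(x, trim)
    tm_y = stats.trim_mean(y, trim)
    # Winsorized variances replace the ordinary variances.
    wv_x = np.var(np.asarray(winsorize(x, limits=(trim, trim))), ddof=1)
    wv_y = np.var(np.asarray(winsorize(y, limits=(trim, trim))), ddof=1)
    pooled_wsd = np.sqrt(((len(x) - 1) * wv_x + (len(y) - 1) * wv_y)
                         / (len(x) + len(y) - 2))
    # scale_const (assumed) rescales to the Cohen's d metric under normality.
    return scale_const * (tm_x - tm_y) / pooled_wsd

rng = np.random.default_rng(6)
group1 = rng.normal(loc=0.5, scale=1.0, size=50)
group2 = rng.normal(loc=0.0, scale=1.0, size=50)
print(robust_effect_size(group1, group2))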
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how to best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classifications models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]), was significantly lower (p = 0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). 
The difference between the SVM-Margin and Exploitation methods was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67). Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2017-09-01
Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: 0.0182 to 0.0496) was significantly lower (p=0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: 0.0275 to 0.0724). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was almost 50% higher than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p=0.29), as was the difference between the Combination_XA and Exploitation methods (p=0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.
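As a rough illustration of the two variability measures compared above, the following sketch computes a mean intra-labeler and a mean inter-labeler standard deviation from a matrix of AUC learning curves; the array shape, the synthetic AUC values, and the variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

# auc[l, t] = AUC of the model induced from labeler l after t labeling iterations
# (hypothetical 7 labelers x 20 training-phase points; values are random placeholders)
rng = np.random.default_rng(0)
auc = np.clip(0.7 + 0.01 * np.arange(20) + rng.normal(0, 0.03, size=(7, 20)), 0, 1)

# Intra-labeler variability: SD along each labeler's own learning curve,
# then averaged over labelers (one value per learning method).
intra_sd = auc.std(axis=1, ddof=1)          # shape (7,)
mean_intra_sd = intra_sd.mean()

# Inter-labeler variability: SD across labelers at each training point,
# then averaged over the training phase.
inter_sd = auc.std(axis=0, ddof=1)          # shape (20,)
mean_inter_sd = inter_sd.mean()

print(f"mean intra-labeler SD: {mean_intra_sd:.4f}")
print(f"mean inter-labeler SD: {mean_inter_sd:.4f}")
```

Running the same computation once per learning method allows the kind of method-to-method comparison reported above.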
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented through double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also contributed substantially to micro-environmental sensitivity. Addition of random regressions to the mean model did not reduce heterogeneity in residual variance, indicating that genetic heterogeneity of residual variance was not simply an effect of an incomplete mean model. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.
2009-01-01
The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students’ expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Bian, L. G.; Chen, Z. G.; Sparrow, M.; Zhang, J. H.
2006-05-01
This paper describes the application of the variance method for flux estimation over a mixed agricultural region in China. Eddy covariance and flux variance measurements were conducted in a near-surface layer over a non-uniform land surface in the central plain of China from 7 June to 20 July 2002. During this period, the mean canopy height was about 0.50 m. The study site consisted of grass (10% of area), beans (15%), corn (15%) and rice (60%). Under unstable conditions, the standard deviations of temperature and water vapor density (normalized by appropriate scaling parameters), observed by a single instrument, followed the Monin-Obukhov similarity theory. The similarity constants for heat (C_T) and water vapor (C_q) were 1.09 and 1.49, respectively. In comparison with direct measurements using eddy covariance techniques, the flux variance method, on average, underestimated sensible heat flux by 21% and latent heat flux by 24%, which may be attributed to the fact that the observed slight deviations (20% or 30% at most) of the similarity "constants" may be within the expected range of variation of a single instrument from the generally-valid relations.
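The flux-variance approach described above can be sketched as follows under a commonly used free-convection form of the Monin-Obukhov similarity relations; this exact formulation is an assumption (the paper may use a different one), and all inputs except the similarity constants C_T = 1.09 and C_q = 1.49 quoted in the abstract are placeholder values.

```python
import numpy as np

# Assumed free-convection flux-variance relations (not necessarily the paper's form).
k, g = 0.4, 9.81          # von Karman constant, gravity (m s-2)
rho, cp = 1.15, 1005.0    # air density (kg m-3) and specific heat (J kg-1 K-1)
Lv = 2.45e6               # latent heat of vaporization (J kg-1)
z = 2.5                   # measurement height above displacement (m), assumed
T_mean = 300.0            # mean air temperature (K), assumed

C_T, C_q = 1.09, 1.49     # similarity constants reported in the abstract

sigma_T = 0.35            # standard deviation of temperature (K), example value
sigma_q = 0.45e-3         # standard deviation of water vapour density (kg m-3), example

# Sensible heat flux: H = rho*cp*(sigma_T/C_T)**1.5 * sqrt(k*g*z/T_mean)
H = rho * cp * (sigma_T / C_T) ** 1.5 * np.sqrt(k * g * z / T_mean)

# Water vapour flux uses the buoyancy (temperature) flux in the free-convection scaling:
# w'q' = (sigma_q/C_q) * (k*g*z/T_mean * H/(rho*cp))**(1/3)
E = (sigma_q / C_q) * (k * g * z / T_mean * H / (rho * cp)) ** (1 / 3)
LE = Lv * E

print(f"H  = {H:6.1f} W m-2")
print(f"LE = {LE:6.1f} W m-2")
```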
Multiscale analysis of the CMB temperature derivatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es
2017-02-01
We study the Planck CMB temperature at different scales through its derivatives up to second order, which allows one to characterize the local shape and isotropy of the field. The problem of having an incomplete sky in the calculation and statistical characterization of the derivatives is addressed in the paper. The analysis confirms the existence of a low variance in the CMB at large scales, which is also noticeable in the derivatives. Moreover, deviations from the standard model in the gradient, curvature and the eccentricity tensor are studied in terms of extreme values on the data. As expected, the Cold Spot is detected as one of the most prominent peaks in terms of curvature, but additionally, when the information of the temperature and its Laplacian are combined, another feature with similar probability at the scale of 10° is also observed. However, the p-values of these two deviations increase above 6% when they are referred to the variance calculated from the theoretical fiducial model, indicating that these deviations can be associated with the low variance anomaly. Finally, an estimator of the directional anisotropy for spinorial quantities is introduced, which is applied to the spinors derived from the field derivatives. An anisotropic direction whose probability is <1% is detected in the eccentricity tensor.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
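For context, the sketch below computes the conventional overlapping Allan deviation (the baseline statistic that TOTALDEV is compared against; TOTALVAR itself is not implemented here) from synthetic white-frequency-noise data.

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation from phase data x (seconds) sampled every tau0 s,
    evaluated at averaging time tau = m*tau0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if n < 2 * m + 1:
        raise ValueError("not enough samples for this averaging factor")
    d2 = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]   # second differences of phase
    avar = np.sum(d2 ** 2) / (2.0 * (m * tau0) ** 2 * (n - 2 * m))
    return np.sqrt(avar)

# Example: white frequency noise, for which the Allan deviation falls off roughly as tau**-0.5
rng = np.random.default_rng(1)
tau0 = 1.0
y = rng.normal(0.0, 1e-11, size=100_000)             # fractional frequency samples
x = np.concatenate(([0.0], np.cumsum(y) * tau0))     # integrate to phase (seconds)

for m in (1, 10, 100, 1000):
    print(m * tau0, overlapping_adev(x, tau0, m))
```

The long-averaging-time behaviour of this estimator is exactly where its variability grows and where the TOTAL statistics described above are intended to help.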
Useful Effect Size Interpretations for Single Case Research
ERIC Educational Resources Information Center
Parker, Richard I.; Hagan-Burke, Shanna
2007-01-01
An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R[superscript 2] as "percent of variance accounted for" do not resound with most visual analysts. In fact, the only comparative analysis widely…
2014-03-27
[Extraction fragments from the source document's table of contents, list of figures, and abbreviation list: number of hops, number of sensors, standard deviation vs. Ns, bias; abbreviations include MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying), and QAM.]
Family losses following truncation selection in populations of half-sib families
J. H. Roberds; G. Namkoong; H. Kang
1980-01-01
Family losses during truncation selection may be sizable in populations of half-sib families. Substantial losses may occur even in populations containing little or no variation among families. Heavier losses will occur, however, under conditions of high heritability where there is considerable family variation. Standard deviations and therefore variances of family loss...
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
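A small numerical illustration in the spirit of the argument above: under a simple mean-variance objective (an assumed form; the note's own proof is framed differently), a prospect that can only gain is eventually rejected as its payoff grows, because its variance grows faster than its mean.

```python
# Prospect: win amount M with probability p, otherwise 0 (no possibility of loss).
p, lam = 0.5, 0.01          # win probability and mean-variance trade-off weight (assumed)

for M in (1, 10, 100, 1000):
    mean = p * M
    var = p * (1 - p) * M ** 2
    score = mean - lam * var  # mean-variance "utility"
    print(f"M={M:5d}  mean={mean:8.1f}  variance={var:10.1f}  score={score:10.1f}")

# For M large enough the score turns negative and the prospect is rejected,
# even though it offers only a chance of gain and zero probability of loss.
```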
McClure, Foster D; Lee, Jung K
2012-01-01
The validation process for an analytical method usually employs an interlaboratory study conducted as a balanced completely randomized model involving a specified number of randomly chosen laboratories, each analyzing a specified number of randomly allocated replicates. For such studies, formulas to obtain approximate unbiased estimates of the variance and uncertainty of the sample laboratory-to-laboratory (lab-to-lab) STD (S_L) have been developed primarily to account for the uncertainty of S_L when there is a need to develop an uncertainty budget that includes the uncertainty of S_L. For the sake of completeness on this topic, formulas to estimate the variance and uncertainty of the sample lab-to-lab variance (S_L²) were also developed. In some cases, it was necessary to derive the formulas based on an approximate distribution for S_L².
Putative golden proportions as predictors of facial esthetics in adolescents.
Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C
2008-10-01
In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Also, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper, the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed with known system transfer functions and known power spectral densities from the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-off and trouble-shooting.
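The frequency-domain route described above rests on the standard relation between the Allan variance and the one-sided power spectral density of fractional frequency, σ_y²(τ) = 2∫ S_y(f) sin⁴(πfτ)/(πfτ)² df. The sketch below evaluates this integral numerically and checks it against the closed form for white frequency noise; the synthesizer's transfer functions and noise sources are not modeled here.

```python
import numpy as np

def allan_var_from_psd(f, S_y, tau):
    """Allan variance at averaging time tau from a one-sided PSD S_y(f) of fractional
    frequency: sigma_y^2(tau) = 2 * integral of S_y(f) * sin^4(pi f tau)/(pi f tau)^2 df."""
    x = np.pi * f * tau
    kernel = np.zeros_like(f)
    nz = x > 0
    kernel[nz] = np.sin(x[nz]) ** 4 / x[nz] ** 2
    return 2.0 * np.trapz(S_y * kernel, f)

# Example: white frequency noise S_y(f) = h0, for which sigma_y^2(tau) = h0 / (2*tau)
h0 = 1e-22
f = np.linspace(1e-4, 100.0, 1_000_000)   # integration grid (Hz); upper cutoff assumed
for tau in (1.0, 10.0, 100.0):
    numeric = allan_var_from_psd(f, h0 * np.ones_like(f), tau)
    print(tau, numeric, h0 / (2 * tau))   # numeric result vs. closed-form check
```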
Ontogeny of morningness-eveningness across the adult human lifespan
NASA Astrophysics Data System (ADS)
Randler, Christoph
2016-02-01
Sleep timing of humans can be classified along a continuum from early to late sleepers, with some people (larks) having early activity and early bed and rise times, and others (owls) having a more nocturnally oriented activity. Only a few studies reported that morningness-eveningness changes significantly during the adult lifespan based on community samples. Here, I applied a different methodological approach to seek evidence for age-related changes in morningness-eveningness preferences by using meta-data from all available studies. The new aspect of this cross-sectional approach is that only a few studies themselves address the age-related changes of the adult lifespan development, but that many studies are available that provide exactly the data needed. The studies came from 27 countries and included 36,939 participants. Age was highly significantly correlated with scores on the Composite Scale of Morningness (r = 0.70). This relationship seems linear, because a linear regression explained nearly the same amount of variance compared to other models such as logarithmic, quadratic, or cubic models. The standard deviation of age correlated with the standard deviation of CSM scores (r = 0.55), suggesting that when there is much variance in age in a study, there is in turn much variance in morningness. This meta-analytical approach shows that morningness-eveningness changes across the adult lifespan and that older age is related to higher morningness.
Associations between heterozygosity and growth rate variables in three western forest trees
Jeffry B. Milton; Peggy Knowles; Kareen B. Sturgeon; Yan B. Linhart; Martha Davis
1981-01-01
For each of three species, quaking aspen, ponderosa pine, and lodgepole pine, we determined the relationships between a ranking of heterozygosity of individuals and measures of growth rate. Genetic variation was assayed by starch gel electrophoresis of enzymes. Growth rates were characterized by the mean, standard deviation, logarithm of the variance, and coefficient...
Child and Informant Influences on Behavioral Ratings of Preschool Children
ERIC Educational Resources Information Center
Phillips, Beth M.; Lonigan, Christopher J.
2010-01-01
This study investigated relationships among teacher, parent, and observer behavioral ratings of 3- and 4-year-old children using intra-class correlations and analysis of variance. Comparisons within and across children from middle-income (MI; N = 166; mean age = 54.25 months, standard deviation [SD] = 8.74) and low-income (LI; N = 199; mean age =…
Liebert, Adam; Wabnitz, Heidrun; Elster, Clemens
2012-05-01
Time-resolved near-infrared spectroscopy allows for depth-selective determination of absorption changes in the adult human head that facilitates separation between cerebral and extra-cerebral responses to brain activation. The aim of the present work is to analyze which combinations of moments of measured distributions of times of flight (DTOF) of photons and source-detector separations are optimal for the reconstruction of absorption changes in a two-layered tissue model corresponding to extra- and intra-cerebral compartments. To this end we calculated the standard deviations of the derived absorption changes in both layers by considering photon noise and a linear relation between the absorption changes and the DTOF moments. The results show that the standard deviation of the absorption change in the deeper (superficial) layer increases (decreases) with the thickness of the superficial layer. It is confirmed that for the deeper layer the use of higher moments, in particular the variance of the DTOF, leads to an improvement. For example, when measurements at four different source-detector separations between 8 and 35 mm are available and a realistic thickness of the upper layer of 12 mm is assumed, the inclusion of the change in mean time of flight, in addition to the change in attenuation, leads to a reduction of the standard deviation of the absorption change in the deeper tissue layer by a factor of 2.5. A reduction by another 4% can be achieved by additionally including the change in variance.
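A generic linear error-propagation sketch of the kind of calculation described above: given an assumed sensitivity matrix relating layer absorption changes to changes in DTOF moments and an assumed photon-noise covariance, the weighted-least-squares covariance yields the standard deviations of the reconstructed absorption changes. All numerical values are placeholders, not the paper's sensitivities or noise model.

```python
import numpy as np

# Model: changes in DTOF moments  dm = S @ dmu_a + noise,
# where dmu_a = [absorption change in upper layer, absorption change in deeper layer].
# Rows: (attenuation, mean time of flight, variance) at two source-detector separations.
S = np.array([
    [0.8, 0.1],
    [0.3, 0.2],
    [0.1, 0.3],
    [1.0, 0.3],
    [0.4, 0.5],
    [0.2, 0.8],
])                                                            # hypothetical sensitivities
sigma_noise = np.array([0.01, 0.02, 0.05, 0.01, 0.02, 0.05])  # photon-noise SD per moment
Sigma = np.diag(sigma_noise ** 2)

# Weighted-least-squares covariance of the estimated absorption changes:
cov = np.linalg.inv(S.T @ np.linalg.inv(Sigma) @ S)
std = np.sqrt(np.diag(cov))
print("SD of absorption change, upper layer :", std[0])
print("SD of absorption change, deeper layer:", std[1])
```

Dropping rows (moments or separations) from S and recomputing shows, in the same spirit as the paper's analysis, how much each added moment reduces the standard deviation in the deeper layer.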
Nakling, Jakob; Buhaug, Harald; Backe, Bjorn
2005-10-01
In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to start of pregnancy, and the biologic variation of gestational length to delivery; and to estimate the random error of routine ultrasound assessment of gestational age in mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele, and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from last menstrual period to pregnancy start was estimated to be 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated to be 12.4 days. The estimate of the standard deviation of the random error of ultrasound-assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from last menstrual period to the real start of pregnancy is substantial, and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
NASA Astrophysics Data System (ADS)
Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor
2014-03-01
In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe and three trials were performed using varied probes, varied tracking devices and the three aforementioned phantoms. The accuracy and variance of spatial calibrations found through the standard deviation and error of the 3-D image reprojection were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity
NASA Technical Reports Server (NTRS)
Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.
1995-01-01
To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity over a large range of scales is related. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis as well as other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
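A minimal sketch of the band-wise statistic discussed above: decompose a series onto an orthogonal wavelet basis, take the standard deviation of each detail band, and form the ratio between adjacent bands (a global, simplified stand-in for the windowed, local version used in the paper). The synthetic series and the choice of the Haar wavelet are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
lwp = np.cumsum(rng.normal(size=4096))        # synthetic stand-in for a liquid water path series

coeffs = pywt.wavedec(lwp, "haar", level=6)   # orthogonal wavelet decomposition
detail_sd = [np.std(d) for d in coeffs[1:]]   # one SD per frequency band (coarse -> fine)

# Ratio of each band's SD to the next-coarser band's SD; the paper reports ~0.9 for LWP data.
ratios = [detail_sd[i + 1] / detail_sd[i] for i in range(len(detail_sd) - 1)]
print("band standard deviations:", np.round(detail_sd, 3))
print("adjacent-band ratios    :", np.round(ratios, 3))
```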
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Pardo, Deborah; Jenouvrier, Stéphanie; Weimerskirch, Henri; Barbraud, Christophe
2017-06-19
Climate changes include concurrent changes in environmental mean, variance and extremes, and it is challenging to understand their respective impact on wild populations, especially when contrasted age-dependent responses to climate occur. We assessed how changes in mean and standard deviation of sea surface temperature (SST), frequency and magnitude of warm SST extreme climatic events (ECE) influenced the stochastic population growth rate log(λs) and age structure of a black-browed albatross population. For changes in SST around historical levels observed since 1982, changes in standard deviation had a larger (threefold) and negative impact on log(λs) compared to changes in mean. By contrast, the mean had a positive impact on log(λs). The historical SST mean was lower than the optimal SST value for which log(λs) was maximized. Thus, a larger environmental mean increased the occurrence of SST close to this optimum that buffered the negative effect of ECE. This 'climate safety margin' (i.e. difference between optimal and historical climatic conditions) and the specific shape of the population growth rate response to climate for a species determine how ECE affect the population. For a wider range in SST, both the mean and standard deviation had negative impact on log(λs), with changes in the mean having a greater effect than the standard deviation. Furthermore, around SST historical levels increases in either mean or standard deviation of the SST distribution led to a younger population, with potentially important conservation implications for black-browed albatrosses. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
Improving IQ measurement in intellectual disabilities using true deviation from population norms
2014-01-01
Background Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. Methods We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. Results We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Conclusion Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment. PMID:26491488
Improving IQ measurement in intellectual disabilities using true deviation from population norms.
Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David
2014-01-01
Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
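A minimal sketch of the deviation z-score idea described above: express a raw score relative to the general-population mean and standard deviation for the examinee's age, rather than converting it to a floored scaled score. The normative values and raw scores below are hypothetical; the SB5 tables and the authors' exact procedure are not reproduced.

```python
import numpy as np

def deviation_z(raw_score, norm_mean, norm_sd):
    """Express a raw subtest score as a z-score relative to the general-population
    normative mean and SD for the examinee's age."""
    return (raw_score - norm_mean) / norm_sd

raw = np.array([8, 12, 15])          # raw scores of three examinees on one subtest (hypothetical)
norm_mean, norm_sd = 30.0, 6.0       # hypothetical age-based population norm

z = deviation_z(raw, norm_mean, norm_sd)
print(z)   # e.g. [-3.67 -3.0 -2.5]: differences that a floored scaled score would hide
```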
Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro
2015-05-27
To examine whether deviation from the standard intention to treat analysis has an influence on treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, a random selection was made of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding was extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials. The ratio of odds ratios was calculated (value <1 indicated larger treatment effects in mITT trials than in other trial categories). 50 meta-analyses and 322 comparisons of randomised trials (from 84 ITT trials, 118 mITT trials, and 108 no ITT trials; 12 trials contributed twice to the analysis) were examined. Compared with ITT trials, mITT trials showed a larger intervention effect (pooled ratio of odds ratios 0.83 (95% confidence interval 0.71 to 0.96), P=0.01; between-meta-analyses variance τ²=0.13). Adjustments for sample size, type of centre, funding, items of risk of bias, post-randomisation exclusions, and variance of log odds ratio yielded consistent results (0.80 (0.69 to 0.94), P=0.005; τ²=0.08). After exclusion of five influential studies, results remained consistent (0.85 (0.75 to 0.98); τ²=0.08). The comparison between mITT trials and no ITT trials showed no statistical difference between the two groups (adjusted ratio of odds ratios 0.92 (0.70 to 1.23); τ²=0.57). Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputations. © Abraha et al 2015.
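A sketch of the ratio-of-odds-ratios comparison with toy numbers (not the study's data): within each meta-analysis the log odds ratios of mITT and ITT trials are contrasted, and the resulting log ratios are pooled with a DerSimonian-Laird random-effects model; the study's adjusted models are not reproduced here.

```python
import numpy as np

# Toy per-meta-analysis summaries (log odds ratios and their variances), purely illustrative.
log_or_mitt = np.array([-0.40, -0.10, -0.55, -0.20])
var_mitt    = np.array([0.04, 0.06, 0.05, 0.03])
log_or_itt  = np.array([-0.20, 0.00, -0.35, -0.15])
var_itt     = np.array([0.05, 0.04, 0.06, 0.03])

# Within each meta-analysis: log ROR = log(OR_mITT) - log(OR_ITT), variances add.
log_ror = log_or_mitt - log_or_itt
v = var_mitt + var_itt

# DerSimonian-Laird random-effects pooling across meta-analyses.
w = 1 / v
q = np.sum(w * (log_ror - np.sum(w * log_ror) / np.sum(w)) ** 2)
df = len(log_ror) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
w_star = 1 / (v + tau2)
pooled = np.sum(w_star * log_ror) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))

print(f"pooled ROR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f}); "
      f"tau^2 = {tau2:.3f}")
```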
Phonological Awareness and Print Knowledge of Preschool Children with Cochlear Implants
Ambrose, Sophie E.; Fey, Marc E.; Eisenberg, Laurie S.
2012-01-01
Purpose To determine whether preschool-age children with cochlear implants have age-appropriate phonological awareness and print knowledge and to examine the relationships of these skills with related speech and language abilities. Method 24 children with cochlear implants (CIs) and 23 peers with normal hearing (NH), ages 36 to 60 months, participated. Children’s print knowledge, phonological awareness, language, speech production, and speech perception abilities were assessed. Results For phonological awareness, the CI group’s mean score fell within 1 standard deviation of the TOPEL’s normative sample mean but was more than 1 standard deviation below our NH group mean. The CI group’s performance did not differ significantly from that of the NH group for print knowledge. For the CI group, phonological awareness and print knowledge were significantly correlated with language, speech production, and speech perception. Together, these predictor variables accounted for 34% of variance in the CI group’s phonological awareness but no significant variance in their print knowledge. Conclusions Children with CIs have the potential to develop age-appropriate early literacy skills by preschool-age but are likely to lag behind their NH peers in phonological awareness. Intervention programs serving these children should target these skills with instruction and by facilitating speech and language development. PMID:22223887
Who's biased? A meta-analysis of buyer-seller differences in the pricing of lotteries.
Yechiam, Eldad; Ashby, Nathaniel J S; Pachur, Thorsten
2017-05-01
A large body of empirical research has examined the impact of trading perspective on pricing of consumer products, with the typical finding being that selling prices exceed buying prices (i.e., the endowment effect). Using a meta-analytic approach, we examine to what extent the endowment effect also emerges in the pricing of monetary lotteries. As monetary lotteries have a clearly defined normative value, we also assess whether one trading perspective is more biased than the other. We consider several indicators of bias: absolute deviation from expected values, rank correlation with expected values, overall variance, and per-unit variance. The meta-analysis, which includes 35 articles, indicates that selling prices considerably exceed buying prices (Cohen's d = 0.58). Importantly, we also find that selling prices deviate less from the lotteries' expected values than buying prices, both in absolute and in relative terms. Selling prices also exhibit lower variance per unit. Hierarchical Bayesian modeling with cumulative prospect theory indicates that buyers have lower probability sensitivity and a more pronounced response bias. The finding that selling prices are more in line with normative standards than buying prices challenges the prominent account whereby sellers' valuations are upward biased due to loss aversion, and supports alternative theoretical accounts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Mavilio, Alberto; Sisto, Dario; Ferreri, Paolo; Cardascia, Nicola; Alessio, Giovanni
2017-01-01
A significant variability of the second harmonic (2ndH) phase of steady-state pattern electroretinogram (SS-PERG) in intrasession retest has been recently described in glaucoma patients (GP), which has not been found in healthy subjects. To evaluate the reliability of phase variability in retest (a procedure called RE-PERG or REPERG) in the presence of cataract, which is known to affect standard PERG, we tested this procedure in GP, normal controls (NC), and cataract patients (CP). The procedure was performed on 50 GP, 35 NC, and 27 CP. All subjects were examined with RE-PERG and SS-PERG and also with spectral domain optical coherence tomography and standard automated perimetry. Standard deviation of phase and amplitude value of 2ndH were correlated by means of one-way analysis of variance and Pearson correlation, with the mean deviation and pattern standard deviation assessed by standard automated perimetry and retinal nerve fiber layer and the ganglion cell complex thickness assessed by spectral domain optical coherence tomography. Receiver operating characteristics were calculated in cohort populations with and without cataract. Standard deviation of phase of 2ndH was significantly higher in GP with respect to NC (P<0.001) and CP (P<0.001), and it correlated with retinal nerve fiber layer (r = -0.5, P<0.001) and ganglion cell complex (r = -0.6, P<0.001) defects in GP. Receiver operating characteristic evaluation showed higher specificity of RE-PERG (86.4%; area under the curve 0.93) with respect to SS-PERG (54.5%; area under the curve 0.68) in CP. RE-PERG may improve the specificity of SS-PERG in clinical practice in the discrimination of GP.
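A rough sketch of the retest phase-variability idea: estimate the phase of the response's second harmonic in each retest block via an FFT and take the standard deviation of that phase across blocks. The stimulus rate, sampling rate, and noise model below are assumptions for illustration, not the RE-PERG recording parameters.

```python
import numpy as np

fs = 1024.0                 # sampling rate (Hz), assumed
f_rev = 8.0                 # pattern reversal rate (Hz), assumed; 2nd harmonic at 16 Hz
t = np.arange(0, 1.0, 1 / fs)

rng = np.random.default_rng(5)
phases = []
for _ in range(5):          # five simulated retest blocks
    jitter = rng.normal(0, 0.1)                      # simulated phase instability
    sig = np.sin(2 * np.pi * 2 * f_rev * t + jitter) + rng.normal(0, 0.5, t.size)
    spectrum = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = np.argmin(np.abs(freqs - 2 * f_rev))         # frequency bin of the 2nd harmonic
    phases.append(np.angle(spectrum[k]))

print("SD of 2nd-harmonic phase across retests:", np.std(phases))
```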
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
Öztürk, Hande; Noyan, I. Cevdet
2017-08-24
A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
Standard deviations of composition measurements in atom probe analyses-Part II: 3D atom probe.
Danoix, F; Grancher, G; Bostel, A; Blavette, D
2007-09-01
In a companion paper [F. Danoix, G. Grancher, A. Bostel, D. Blavette, Surf. Interface Anal., this issue (previous paper)], the derivation of variances of the estimates of measured composition, and the underlying hypotheses, have been revisited in the case of conventional one-dimensional (1D) atom probes. In this second paper, we will concentrate on the analytical derivation of the variance when the estimate of composition is obtained from a 3D atom probe. As will be discussed, when the position information is available, compositions can be derived either from blocks containing a constant number of atoms or from blocks of constant volume. The analytical treatment in the first case is identical to the one developed for conventional 1D instruments, and will not be discussed further in this paper. Conversely, in the second case, the analytical treatment is different, as is the formula for the variance. In particular, it will be shown that the detection efficiency plays an important role in the determination of the variance.
NASA Technical Reports Server (NTRS)
Woodcock, C. E.; Strahler, A. H.
1984-01-01
Digital images derived by scanning air photos and through acquiring aircraft and spacecraft scanner data were studied. Results show that spatial structure in scenes can be measured and logically related to texture and image variance. Imagery data were used from a South Dakota forest; a housing development in Canoga Park, California; an agricultural area in Mississippi, Louisiana, Kentucky, and Tennessee; the city of Washington, D.C.; and the Klamath National Forest. Local variance, measured as the average standard deviation of brightness values within a three-by-three moving window, reaches a peak at a resolution cell size about two-thirds to three-fourths the size of the objects within the scene. If objects are smaller than the resolution cell size of the image, this peak does not occur and local variance simply decreases with increasing resolution as spatial averaging occurs. Variograms can also reveal the size, shape, and density of objects in the scene.
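A sketch of the local-variance statistic described above: the three-by-three moving-window standard deviation, averaged over the image, recomputed after aggregating the image to coarser resolution cells. The aggregation scheme and the random test image are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=3):
    """Standard deviation of brightness values within a size x size moving window."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    var = np.maximum(mean_sq - mean ** 2, 0.0)   # guard against small negative round-off
    return np.sqrt(var)

def degrade(img, cell):
    """Aggregate the image into cell x cell resolution elements by block averaging (assumed)."""
    h, w = (img.shape[0] // cell) * cell, (img.shape[1] // cell) * cell
    return img[:h, :w].reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

rng = np.random.default_rng(3)
image = rng.integers(0, 255, size=(256, 256)).astype(float)   # stand-in scene

for cell in (1, 2, 4, 8, 16):
    print(cell, local_std(degrade(image, cell)).mean())   # mean local SD vs. cell size
```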
On the internal target model in a tracking task
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Baron, S.
1981-01-01
An optimal control model for predicting an operator's dynamic responses and errors in target tracking ability is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. Gunners' perception, decision making, control, and estimates of target positions and velocities related to crossover intervals are discussed. The model provides estimates of means, standard deviations, and variances for the variables investigated and for operator estimates of future target positions and velocities.
1980-12-01
[Extraction fragments from the source report's table of contents and analysis summary: perceived noisiness values are referenced to sound pressure level in decibels at a frequency of 1000 Hz; sections cover perceived noise level analysis, acoustic weighting networks, and derivations; basic statistical analyses include octave-band and third-octave-band analyses and perceived noise level, with mean, variance, and standard deviation calculations.]
Zhu, Yuying; Wang, Jianmin; Wang, Cunfang
2018-05-01
Using fresh goat milk as the raw material, and after filtering, centrifuging, hollow-fiber ultrafiltration, formula allocation, value determination, and preparation processing, a set of 10 mixed goat milk standard substances was prepared on a one-factor-at-a-time basis using a uniform design method; its accuracy, uniformity, and stability were evaluated by paired t-tests and the F-test of one-way analysis of variance. The results showed that the three milk composition contents of these standard substances were independent of each other, and that preparation using the quasi-level design method without emulsifier was the best scheme. Compared with calibrating the rapid analyzer against cow milk standards, calibration with the mixed goat milk standard was more suitable for rapid detection of goat milk composition; the detected values were more accurate and showed smaller deviations. One-way analysis of variance showed that the mixed standard substance had good uniformity and stability; it could be stored for 15 days at 4°C. The within-unit and between-unit uniformity and stability met the requirements for the preparation of national standard substances. © 2018 Japanese Society of Animal Science.
Maeder, Angela B; Vonderheid, Susan C; Park, Chang G; Bell, Aleeca F; McFarlin, Barbara L; Vincent, Catherine; Carter, C Sue
To evaluate whether oxytocin titration for postdates labor induction differs among women who are normal weight, overweight, and obese and whether length of labor and birth method differ by oxytocin titration and body mass index (BMI). Retrospective cohort study. U.S. university-affiliated hospital. Of 280 eligible women, 21 were normal weight, 134 were overweight, and 125 were obese at labor admission. Data on women who received oxytocin for postdates induction between January 1, 2013 and June 30, 2013 were extracted from medical records. Oxytocin administration and labor outcomes were compared across BMI groups, controlling for potential confounders. Data were analyzed using χ², analysis of variance, analysis of covariance, and multiple linear and logistic regression models. Women who were obese received more oxytocin than women who were overweight in the unadjusted analysis of variance (7.50 units compared with 5.92 units, p = .031). Women who were overweight had more minutes between rate changes from initiation to maximum than women who were obese (98.19 minutes compared with 83.39 minutes, p = .038). Length of labor increased with BMI (p = .018), with a mean length of labor for the normal weight group of 13.96 hours (standard deviation = 8.10); for the overweight group, 16.00 hours (standard deviation = 7.54); and for the obese group, 18.30 hours (standard deviation = 8.65). Cesarean rate increased with BMI (p = .001), with 4.8% of normal weight, 33.6% of overweight, and 42.4% of obese women having cesarean births. Women who were obese and experienced postdates labor induction received more oxytocin than women who were non-obese and had longer length of labor and greater cesarean rates. Copyright © 2017 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.
Analysis of variances of quasirapidities in collisions of gold nuclei with track-emulsion nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V., E-mail: lugovoi@uzsci.net
2012-08-15
A new method of analysis of variances was developed for studying n-particle correlations of quasirapidities in nucleus-nucleus collisions for a large constant number n of particles. Formulas that generalize the results of the respective analysis to various values of n were derived. Calculations on the basis of simple models indicate that the method is applicable, at least for n ≥ 100. Quasirapidity correlations statistically significant at a level of 36 standard deviations were discovered in collisions between gold nuclei and track-emulsion nuclei at an energy of 10.6 GeV per nucleon. The experimental data obtained in our present study are contrasted against the theory of nucleus-nucleus collisions.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
Proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a3 = μ3/σ³ and the kurtosis a4 = μ4/σ⁴ − 3, where μi is the i-th order central moment and σ is the standard deviation. It is well known that for a normal distribution a3 = a4 = 0.
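The skewness and kurtosis checks described above are straightforward to compute. The sketch below, using hypothetical residuals rather than any real O-F dataset, evaluates a3 and a4 for a sample of observed-minus-forecast differences; values near zero support the normality assumption behind the variance correction.

```python
import numpy as np

def normality_indicators(o_minus_f):
    """Skewness a3 and excess kurtosis a4 of observed-minus-forecast residuals;
    both are close to zero when the residuals are approximately normal."""
    r = np.asarray(o_minus_f, dtype=float)
    r = r - r.mean()
    sigma = r.std()
    a3 = np.mean(r**3) / sigma**3          # skewness
    a4 = np.mean(r**4) / sigma**4 - 3.0    # excess kurtosis
    return a3, a4

# Hypothetical O-F residuals; near-normal data should give values near zero.
rng = np.random.default_rng(0)
print(normality_indicators(rng.normal(0.0, 1.5, size=10_000)))
```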
Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.
2005-01-01
A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used - ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. © 2005 Elsevier B.V. All rights reserved.
Impact of NICU design on environmental noise.
Szymczak, Stacy E; Shellhaas, Renée A
2014-04-01
For neonates requiring intensive care, the optimal sound environment is uncertain. Minimal disruptions from medical staff create quieter environments for sleep, but limit language exposure necessary for proper language development. There are two models of neonatal intensive care units (NICUs): open-bay, in which 6-to-10 infants are cared for in a single large room; and single-room, in which neonates are housed in private, individual hospital rooms. We compared the acoustic environments in the two NICU models. We extracted the audio tracks from video-electroencephalography (EEG) monitoring studies from neonates in an open-bay NICU and compared the acoustic environment to that recorded from neonates in a new single-room NICU. From each NICU, 18 term infants were studied (total N=36; mean gestational age 39.3±1.9 weeks). Neither z-scores of the sound level variance (0.088±0.03 vs. 0.083±0.03, p=0.7), nor percent time with peak sound variance (above 2 standard deviations; 3.6% vs. 3.8%, p=0.6) were different. However, time below 0.05 standard deviations was higher in the single-room NICU (76% vs. 70%, p=0.02). We provide objective evidence that single-room NICUs have equal sound peaks and overall noise level variability compared with open-bay units, but the former may offer significantly more time at lower noise levels.
Throckmorton, Thomas W; Gulotta, Lawrence V; Bonnarens, Frank O; Wright, Stephen A; Hartzell, Jeffrey L; Rozzi, William B; Hurst, Jason M; Frostick, Simon P; Sperling, John W
2015-06-01
The purpose of this study was to compare the accuracy of patient-specific guides for total shoulder arthroplasty (TSA) with traditional instrumentation in arthritic cadaver shoulders. We hypothesized that the patient-specific guides would place components more accurately than standard instrumentation. Seventy cadaver shoulders with radiographically confirmed arthritis were randomized in equal groups to 5 surgeons of varying experience levels who were not involved in development of the patient-specific guidance system. Specimens were then randomized to patient-specific guides based off of computed tomography scanning, standard instrumentation, and anatomic TSA or reverse TSA. Variances in version or inclination of more than 10° and more than 4 mm in starting point were considered indications of significant component malposition. TSA glenoid components placed with patient-specific guides averaged 5° of deviation from the intended position in version and 3° in inclination; those with standard instrumentation averaged 8° of deviation in version and 7° in inclination. These differences were significant for version (P = .04) and inclination (P = .01). Multivariate analysis of variance to compare the overall accuracy for the entire cohort (TSA and reverse TSA) revealed patient-specific guides to be significantly more accurate (P = .01) for the combined vectors of version and inclination. Patient-specific guides also had fewer instances of significant component malposition than standard instrumentation did. Patient-specific targeting guides were more accurate than traditional instrumentation and had fewer instances of component malposition for glenoid component placement in this multi-surgeon cadaver study of arthritic shoulders. Long-term clinical studies are needed to determine if these improvements produce improved functional outcomes. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Variance of transionospheric VLF wave power absorption
NASA Astrophysics Data System (ADS)
Tao, X.; Bortnik, J.; Friedrich, M.
2010-07-01
To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method using the standard ionospheric model IRI and in situ observational data. We first verify the classic absorption curves of Helliwell's using our full wave code. Then we show that the IRI model gives overall smaller wave absorption compared with Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that are more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet time wave absorption is smaller than that of Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.
Syh, J; Patel, B; Syh, J; Wu, H; Rosen, L; Durci, M; Katz, S; Sibata, C
2012-06-01
To evaluate the characteristics of commercial-grade flatbed scanners and medical-grade scanners for radiochromic EBT film dosimetry. Performance aspects of a Vidar Dosimetry Pro Advantage (Red), Epson 750 Pro, Microtek ArtixScan 1800f, and Microtek ScanMaker 8700 scanner for EBT2 Gafchromic film were evaluated in the categories of repeatability, maximum distinguishable optical density (OD) differentiation, OD variance, and dose curve characteristics. OD step film by Stouffer Industries containing 31 steps ranging from 0.05 to 3.62 OD was used. EBT films were irradiated with doses ranging from 20 to 600 cGy in 6×6 cm² field sizes and analyzed 24 hours later using RIT113 and Tomotherapy Film Analyzer software. Scans were performed in transmissive mode, landscape orientation, 16-bit image. The mean and standard deviation of the Analog-to-Digital (A/D) scanner value were measured by selecting a 3×3 mm² uniform area in the central region of each OD step from a total of 20 scans performed over several weeks. Repeatability was determined from the variance of OD step 0.38. Maximum distinguishable OD was defined as the last OD step whose range of A/D values does not overlap with its neighboring step. Repeatability uncertainty ranged from 0.1% for Vidar to 4% for Epson. Average standard deviation of OD steps ranged from 0.21% for Vidar to 6.4% for ArtixScan 1800f. Maximum distinguishable optical density ranged from 3.38 for Vidar to 1.32 for ScanMaker 8700. The A/D range of each OD step corresponds to a dose range. Dose ranges of OD steps varied from 1% for Vidar to 20% for ScanMaker 8700. The Vidar exhibited a dose curve that utilized a broader range of OD values than the other scanners. Vidar exhibited higher maximum distinguishable OD, smaller variance in repeatability, smaller A/D value deviation per OD step, and a shallower dose curve with respect to OD. © 2012 American Association of Physicists in Medicine.
Tests of alternative quantum theories with neutrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sponar, S.; Durstberger-Rennhofer, K.; Badurek, G.
2014-12-04
According to Bell’s theorem, every theory based on local realism is at variance with certain predictions of quantum mechanics. A theory that maintains realism but abandons reliance on locality, which has been proposed by Leggett, is incompatible with experimentally observable quantum correlations. In our experiment correlation measurements of spin-energy entangled single-neutrons violate a Leggett-type inequality by more than 7.6 standard deviations. The experimental data falsify the contextual realistic model and are fully in favor of quantum mechanics.
Schneider, Harald Jörn; Saller, Bernhard; Klotsche, Jens; März, Winfried; Erwa, Wolfgang; Wittchen, Hans-Ullrich; Stalla, Günter Karl
2006-05-01
Insulin-like growth factor-I (IGF-I) has been suggested to be a prognostic marker for the development of cancer and, more recently, cardiovascular disease. These diseases are closely linked to obesity, but reports of the association of IGF-I with measures of obesity are divergent. In this study, we assessed the association of age-dependent IGF-I standard deviation scores with body mass index (BMI) and intra-abdominal fat accumulation in a large population. A cross-sectional, epidemiological study. IGF-I levels were measured with an automated chemiluminescence assay system in 6282 patients from the DETECT study. Weight, height, and waist and hip circumference were measured according to written instructions. Standard deviation scores (SDS), correcting IGF-I levels for age, were calculated and used for further analyses. An inverse U-shaped association of IGF-I SDS with BMI, waist circumference, and the ratio of waist circumference to height was found. BMI was positively associated with IGF-I SDS in normal weight subjects, and negatively associated in obese subjects. The highest mean IGF-I SDS were seen at a BMI of 22.5-25 kg/m2 in men (+0.08), and at a BMI of 27.5-30 kg/m2 in women (+0.21). Multiple linear regression models, controlling for different diseases, medications and risk conditions, revealed a significant negative association of BMI with IGF-I SDS. BMI contributed the most additional explained variance beyond the other health conditions. IGF-I standard deviation scores are decreased in obese and underweight subjects. These interactions should be taken into account when analyzing the association of IGF-I with diseases and risk conditions.
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Litt, Jonathan S.
2010-01-01
This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
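As a rough illustration of the idea (not the NASA implementation, and without the domain-specific outlier logic mentioned above), a sliding-window filter can flag steady-state operation whenever the windowed standard deviation drops below a threshold:

```python
import numpy as np
from collections import deque

def steady_state_means(stream, window=50, std_limit=0.5):
    """Flag a steady-state operating point whenever the standard deviation of
    the last `window` samples falls below `std_limit`, and archive the mean."""
    buf = deque(maxlen=window)
    points = []
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            arr = np.asarray(buf)
            if arr.std() < std_limit:
                points.append(arr.mean())
    return points

# Hypothetical signal: a transient ramp followed by a noisy steady plateau.
rng = np.random.default_rng(1)
signal = np.concatenate([np.linspace(0.0, 100.0, 200),
                         100.0 + rng.normal(0.0, 0.1, 300)])
print(len(steady_state_means(signal)), "steady-state windows detected")
```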
Video denoising using low rank tensor decomposition
NASA Astrophysics Data System (ADS)
Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting
2017-03-01
Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting the similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of our proposed method.
Effects of insertion speed and trocar stiffness on the accuracy of needle position for brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.
2012-04-15
Purpose: In prostate brachytherapy, accurate positioning of the needle tip to place radioactive seeds at the target site is critical for successful radiation treatment. During the procedure, needle deflection leads to seed misplacement and suboptimal radiation dose to cancerous cells. In practice, radiation oncologists commonly use high-speed hand needle insertion to minimize displacement of the prostate as well as needle deflection. Effects of speed during needle insertion and stiffness of the trocar (a solid rod inside the hollow cannula) on needle deflection are studied. Methods: Needle insertion experiments into phantom were performed using a 2² factorial design (2 parameters at 2 levels), with replicates for each condition. Analysis of the deflection data included calculating the average, standard deviation, and analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: The stiffer tungsten carbide trocar is effective in reducing the average and standard deviation of needle deflection. The fast insertion speed together with the stiffer trocar generated the smallest average and standard deviation of needle deflection for almost all cases. Conclusions: The combination of a stiff tungsten carbide trocar and fast needle insertion speed is important to decreasing needle deflection. The knowledge gained from this study can be used to improve the accuracy of needle insertion during brachytherapy procedures.
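A 2² factorial analysis of this kind can be reproduced with standard statistical tooling. The sketch below uses hypothetical deflection numbers, not the study's data, and fits a two-way ANOVA with an interaction term using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical deflection data (mm) for a 2x2 design:
# insertion speed (slow/fast) x trocar material (stainless/carbide), 6 replicates.
rng = np.random.default_rng(2)
rows = []
for speed in ("slow", "fast"):
    for trocar in ("stainless", "carbide"):
        base = (3.0 if speed == "slow" else 2.2) - (0.8 if trocar == "carbide" else 0.0)
        for _ in range(6):
            rows.append({"speed": speed, "trocar": trocar,
                         "deflection": base + rng.normal(0.0, 0.3)})
df = pd.DataFrame(rows)

# Averages and standard deviations per condition, then a two-way ANOVA
# with interaction, as in a 2^2 factorial analysis.
print(df.groupby(["speed", "trocar"])["deflection"].agg(["mean", "std"]))
model = smf.ols("deflection ~ C(speed) * C(trocar)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```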
Analysis of variability of tropical Pacific sea surface temperatures
NASA Astrophysics Data System (ADS)
Davies, Georgina; Cressie, Noel
2016-11-01
Sea surface temperature (SST) in the Pacific Ocean is a key component of many global climate models and the El Niño-Southern Oscillation (ENSO) phenomenon. We shall analyse SST for the period November 1981-December 2014. To study the temporal variability of the ENSO phenomenon, we have selected a subregion of the tropical Pacific Ocean, namely the Niño 3.4 region, as it is thought to be the area where SST anomalies indicate most clearly ENSO's influence on the global atmosphere. SST anomalies, obtained by subtracting the appropriate monthly averages from the data, are the focus of the majority of previous analyses of the Pacific and other oceans' SSTs. Preliminary data analysis showed that not only Niño 3.4 spatial means but also Niño 3.4 spatial variances varied with month of the year. In this article, we conduct an analysis of the raw SST data and introduce diagnostic plots (here, plots of variability vs. central tendency). These plots show strong negative dependence between the spatial standard deviation and the spatial mean. Outliers are present, so we consider robust regression to obtain intercept and slope estimates for the 12 individual months and for all-months-combined. Based on this mean-standard deviation relationship, we define a variance-stabilizing transformation. On the transformed scale, we describe the Niño 3.4 SST time series with a statistical model that is linear, heteroskedastic, and dynamical.
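To make the mean-standard deviation relationship and the resulting variance-stabilizing transformation concrete, the sketch below fits a linear trend of spatial SD against spatial mean on hypothetical values (the paper uses robust regression on the Niño 3.4 data) and applies the corresponding log-based transform:

```python
import numpy as np

# Hypothetical monthly spatial means and spatial standard deviations of SST (deg C)
# for a Nino-3.4-like region; illustrative numbers only, not the observed data.
mean_sst = np.array([26.0, 26.5, 27.0, 27.5, 28.0, 28.5, 29.0])
sd_sst = np.array([1.10, 1.00, 0.85, 0.70, 0.55, 0.45, 0.35])

# Ordinary least-squares fit of the (negative) mean-SD relationship; the paper
# uses robust regression, which could be substituted here.
b, a = np.polyfit(mean_sst, sd_sst, 1)           # sd ~ a + b * mean, with b < 0

def variance_stabilizing(x):
    """If sd(x) ~ a + b*x, the classical transform g(x) = log(a + b*x) / b
    makes the variance approximately constant on the transformed scale."""
    return np.log(a + b * np.asarray(x)) / b

print(f"fitted intercept a = {a:.2f}, slope b = {b:.2f}")
print(variance_stabilizing(mean_sst))
```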
Genomic analysis of cow mortality and milk production using a threshold-linear model.
Tsuruta, S; Lourenco, D A L; Misztal, I; Lawlor, T J
2017-09-01
The objective of this study was to investigate the feasibility of genomic evaluation for cow mortality and milk production using a single-step methodology. Genomic relationships between cow mortality and milk production were also analyzed. Data included 883,887 (866,700) first-parity, 733,904 (711,211) second-parity, and 516,256 (492,026) third-parity records on cow mortality (305-d milk yields) of Holsteins from Northeast states in the United States. The pedigree consisted of up to 1,690,481 animals including 34,481 bulls genotyped with 36,951 SNP markers. Analyses were conducted with a bivariate threshold-linear model for each parity separately. Genomic information was incorporated as a genomic relationship matrix in the single-step BLUP. Traditional and genomic estimated breeding values (GEBV) were obtained with Gibbs sampling using fixed variances, whereas reliabilities were calculated from variances of GEBV samples. Genomic EBV were then converted into single nucleotide polymorphism (SNP) marker effects. Those SNP effects were categorized according to values corresponding to 1 to 4 standard deviations. Moving averages and variances of SNP effects were calculated for windows of 30 adjacent SNP, and Manhattan plots were created for SNP variances with the same window size. Using Gibbs sampling, the reliability for genotyped bulls for cow mortality was 28 to 30% in EBV and 70 to 72% in GEBV. The reliability for genotyped bulls for 305-d milk yields was 53 to 65% in EBV and 81 to 85% in GEBV. Correlations of SNP effects between mortality and 305-d milk yields within categories were the highest with the largest SNP effects and reached >0.7 at 4 standard deviations. All SNP regions explained less than 0.6% of the genetic variance for both traits, except regions close to the DGAT1 gene, which explained up to 2.5% for cow mortality and 4% for 305-d milk yields. Reliability for GEBV with a moderate number of genotyped animals can be calculated from Gibbs samples. Genomic information can greatly increase the reliability of predictions not only for milk but also for mortality. The existence of a common region on Bos taurus autosome 14 affecting both traits may indicate a major gene with a pleiotropic effect on milk and mortality. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
Increasing market efficiency in the stock markets
NASA Astrophysics Data System (ADS)
Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook
2008-01-01
We study the temporal evolution of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail but the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate the model to confirm such increasing and decreasing tendencies in statistical quantities. These findings indicate that these three stock markets are becoming more efficient.
A simple approach for monitoring business service time variation.
Yang, Su-Fen; Arnold, Barry C
2014-01-01
Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriately used here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV chart and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended.
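For readers unfamiliar with EWMA-type monitoring, the sketch below implements a conventional EWMA chart for the process mean on hypothetical service times; it is a generic illustration only and not the EWMA-AV/EWMA-AM statistics proposed in the paper:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, n_incontrol=50):
    """Conventional EWMA chart for the process mean (a generic sketch, not the
    paper's EWMA-AM/EWMA-AV statistics). Returns the EWMA series and limits."""
    x = np.asarray(x, dtype=float)
    mu0 = x[:n_incontrol].mean()                 # in-control mean estimate
    sigma0 = x[:n_incontrol].std(ddof=1)         # in-control std estimate
    z = np.empty_like(x)
    prev = mu0
    for t in range(len(x)):
        prev = lam * x[t] + (1 - lam) * prev     # EWMA recursion
        z[t] = prev
    i = np.arange(1, len(x) + 1)
    hw = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - hw, mu0 + hw

# Hypothetical (skewed) service times with an upward mean shift after obs 150.
rng = np.random.default_rng(3)
times = np.concatenate([rng.gamma(2.0, 2.0, size=150), rng.gamma(2.0, 3.2, size=60)])
z, lcl, ucl = ewma_chart(times)
print(np.where((z > ucl) | (z < lcl))[0][:5])    # indices of first signals, if any
```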
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
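The equal-variance binormal result can be checked numerically. The sketch below simulates a hypothetical continuous marker and compares the empirical c-statistic (in its Mann-Whitney form) with the standard closed-form prediction Φ(Δ/(σ√2)), where Δ is the mean difference and σ the common standard deviation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
sigma, delta = 1.0, 0.8                         # common SD and mean difference

x0 = rng.normal(0.0, sigma, size=1000)          # marker, without the condition
x1 = rng.normal(delta, sigma, size=1000)        # marker, with the condition

# Closed-form binormal (equal-variance) prediction for the c-statistic.
c_predicted = norm.cdf(delta / (sigma * np.sqrt(2)))

# Empirical c-statistic: probability that a case's marker exceeds a non-case's
# (Mann-Whitney form of the area under the ROC curve).
diff = x1[:, None] - x0[None, :]
c_empirical = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

print(round(c_predicted, 3), round(c_empirical, 3))
```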
Measurement of the 20F half-life
NASA Astrophysics Data System (ADS)
Hughes, M.; George, E. A.; Naviliat-Cuncic, O.; Voytas, P. A.; Chandavar, S.; Gade, A.; Huyan, X.; Liddick, S. N.; Minamisono, K.; Paulauskas, S. V.; Weisshaar, D.
2018-05-01
The half-life of the 20F ground state was measured using a radioactive beam implanted in a plastic scintillator, recording β-γ coincidences with four CsI(Na) detectors. The result, T1/2 = 11.0011(69)stat(30)sys s, is at variance by 17 combined standard deviations with the two most precise results. The present value revives the poor consistency of results for this half-life and calls for a new measurement, with a technique having different sources of systematic effects, to clarify the discrepancy.
WASP (Write a Scientific Paper) using Excel - 7: The t-distribution.
Grech, Victor
2018-03-01
The calculation of descriptive statistics after data collection provides researchers with an overview of the shape and nature of their datasets, along with basic descriptors, and may help identify true or incorrect outlier values. This exercise should always precede inferential statistics, when possible. This paper provides some pointers for doing so in Microsoft Excel, both statically and dynamically, with Excel's functions, including the calculation of standard deviation and variance and the relevance of the t-distribution. Copyright © 2018 Elsevier B.V. All rights reserved.
1986-03-01
... While the standard deviation and variance are absolute measures of dispersion, a relative measure of dispersion can also be computed. This measure ... refers to the closeness of fit between the estimates obtained and the true population value. ...
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Analytical probabilistic modeling of RBE-weighted dose for ion therapy.
Wieser, H P; Hennig, P; Wahl, N; Bangert, M
2017-11-10
Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B²) to O(V × B) for the expectation value and from O(V × B⁴) to O(V × B²) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
Analytical probabilistic modeling of RBE-weighted dose for ion therapy
NASA Astrophysics Data System (ADS)
Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.
2017-12-01
Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
Non-additive genetic variation in growth, carcass and fertility traits of beef cattle.
Bolormaa, Sunduimijid; Pryce, Jennie E; Zhang, Yuandan; Reverter, Antonio; Barendse, William; Hayes, Ben J; Goddard, Michael E
2015-04-02
A better understanding of non-additive variance could lead to increased knowledge on the genetic control and physiology of quantitative traits, and to improved prediction of the genetic value and phenotype of individuals. Genome-wide panels of single nucleotide polymorphisms (SNPs) have been mainly used to map additive effects for quantitative traits, but they can also be used to investigate non-additive effects. We estimated dominance and epistatic effects of SNPs on various traits in beef cattle and the variance explained by dominance, and quantified the increase in accuracy of phenotype prediction by including dominance deviations in its estimation. Genotype data (729 068 real or imputed SNPs) and phenotypes on up to 16 traits of 10 191 individuals from Bos taurus, Bos indicus and composite breeds were used. A genome-wide association study was performed by fitting the additive and dominance effects of single SNPs. The dominance variance was estimated by fitting a dominance relationship matrix constructed from the 729 068 SNPs. The accuracy of predicted phenotypic values was evaluated by best linear unbiased prediction using the additive and dominance relationship matrices. Epistatic interactions (additive × additive) were tested between each of the 28 SNPs that are known to have additive effects on multiple traits, and each of the other remaining 729 067 SNPs. The number of significant dominance effects was greater than expected by chance and most of them were in the direction that is presumed to increase fitness and in the opposite direction to inbreeding depression. Estimates of dominance variance explained by SNPs varied widely between traits, but had large standard errors. The median dominance variance across the 16 traits was equal to 5% of the phenotypic variance. Including a dominance deviation in the prediction did not significantly increase its accuracy for any of the phenotypes. The number of additive × additive epistatic effects that were statistically significant was greater than expected by chance. Significant dominance and epistatic effects occur for growth, carcass and fertility traits in beef cattle but they are difficult to estimate precisely and including them in phenotype prediction does not increase its accuracy.
A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO2 (NDP-073)
Jones, Michael H [The Ohio State Univ., Columbus, OH (United States); Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
1999-01-01
To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO2-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP- 072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO2- exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurate as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed with respect to the total moment of force with respect to an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
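The motor-equivalent/non-motor-equivalent split rests on projecting deviations onto the null space of the map from finger forces to total force. A minimal sketch with a hypothetical four-finger deviation vector (not the study's data or its mode space):

```python
import numpy as np

# Jacobian of the performance variable (total force) with respect to the four
# finger forces: F_total = f1 + f2 + f3 + f4.
J = np.ones((1, 4))

def decompose(deviation):
    """Split a finger-force deviation into a motor-equivalent part (leaves total
    force unchanged, lies in the null space of J) and a non-motor-equivalent part."""
    d = np.asarray(deviation, dtype=float)
    P_range = J.T @ np.linalg.pinv(J @ J.T) @ J   # projector onto the row space of J
    non_me = P_range @ d
    me = d - non_me
    return me, non_me

# Hypothetical force deviations (N) observed after perturbing one finger.
me, non_me = decompose([0.6, -0.2, -0.3, 0.1])
print(me, non_me)
print("change in total force:", me.sum(), "+", non_me.sum())   # ME part sums to ~0
```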
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with the fewer number of noisy data points.
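The paper's closed-form CVE expression is specific to least squares collocation, but the underlying idea can be illustrated with the analogous direct leave-one-out formula for a linear least-squares predictor, e_i/(1 − h_ii), verified here against brute-force refitting on hypothetical data:

```python
import numpy as np

def loo_cv_errors(A, y):
    """Leave-one-out cross-validation errors of an ordinary least-squares
    predictor computed directly from the hat matrix, e_i / (1 - h_ii),
    instead of refitting n times (a generic analogue of a direct CVE formula)."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    H = A @ np.linalg.solve(A.T @ A, A.T)        # hat matrix
    residuals = y - H @ y
    return residuals / (1.0 - np.diag(H))

# Verification against brute-force refitting on hypothetical data.
rng = np.random.default_rng(5)
A = rng.normal(size=(30, 3))
y = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=30)
direct = loo_cv_errors(A, y)
brute = [y[i] - A[i] @ np.linalg.lstsq(np.delete(A, i, 0), np.delete(y, i), rcond=None)[0]
         for i in range(len(y))]
print(np.allclose(direct, brute))                # True
```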
The geometry of proliferating dicot cells.
Korn, R W
2001-02-01
The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentae), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation and coefficient of variance (CV = 100 x standard deviation/average) of cell size were determined; the CV of mother cells was less than the CV of daughter cells, and both were less than that of all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations, measured in arbitrary time units, were determined by reconstructing the initial and final sizes of cells, and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average of 11.6% difference in size of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.
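The coefficient of variance defined above is simple to compute; a small sketch with hypothetical cell areas:

```python
import numpy as np

def coefficient_of_variance(sizes):
    """CV = 100 * standard deviation / average, as defined above."""
    a = np.asarray(sizes, dtype=float)
    return 100.0 * a.std(ddof=1) / a.mean()

# Hypothetical epidermal cell areas (arbitrary units) for the two groups.
mother_cells = [410, 450, 395, 470, 430]
daughter_cells = [180, 240, 205, 260, 150, 300, 220, 190]
print(coefficient_of_variance(mother_cells), coefficient_of_variance(daughter_cells))
```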
Quantum speed limit time in a magnetic resonance
NASA Astrophysics Data System (ADS)
Ivanchenko, E. A.
2017-12-01
A visualization of the dynamics of a qudit spin vector in a time-dependent magnetic field is realized by mapping the solution for the spin vector onto a three-dimensional spherical curve (vector hodograph). The obtained results clearly display the quantum interference of precessional and nutational effects on the spin vector in magnetic resonance. For any spin, lower bounds on the quantum speed limit time (QSL) are found. It is shown that the lower bound decreases when multilevel spin systems are used. Under certain conditions the non-zero minimal time needed to reach a state orthogonal to the initial one is attained at spin S = 2. Estimates of the products of two and of three standard deviations of the spin components are presented. We discuss the dynamics of the mutual uncertainty, conditional uncertainty and conditional variance in terms of spin standard deviations. The study can find practical applications in magnetic resonance, 3D visualization of computational data and the design of optimized information-processing devices for quantum computation and communication.
NASA Astrophysics Data System (ADS)
Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.
2014-12-01
Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three most well known datasets are HadCRUT4 produced by the Climatic Research Unit and the Met Office Hadley Centre in UK, one produced by NASA GISS, and one produced by NOAA. Recently Berkeley Earth has also developed a gridded dataset. All these four will be compared in our analysis. The statistical properties we will focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast, and that the autocovariance function will have a stronger tail. Hence the Hurst exponent gives us information about the persistence or memory of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is more weighted towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some differences observed between the variances and memory exponents computed from the various datasets.
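One common way to estimate the Hurst exponent mentioned above is the aggregated-variance method; the sketch below applies it to white noise (for which H should be near 0.5) and is only a simplified illustration, not the datasets' analysis pipeline:

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    """Aggregated-variance estimate of the Hurst exponent: the variance of block
    means scales as m**(2H - 2), so H = 1 + slope/2 on a log-log plot."""
    x = np.asarray(x, dtype=float)
    variances = []
    for m in block_sizes:
        n_blocks = len(x) // m
        block_means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(block_means.var(ddof=1))
    slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

# White noise has no long-range memory, so H should come out near 0.5.
print(hurst_aggregated_variance(np.random.default_rng(6).normal(size=20_000)))
```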
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
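The three-step procedure can be mimicked in a generic form: perturb the data with the estimated observation noise, perturb the baseline (prior) model, rerun the deterministic inversion, and summarize the spread. The sketch below uses a toy linear "inversion" as a stand-in for the nonlinear ERT inversion:

```python
import numpy as np

def parametric_bootstrap(data, noise_std, baseline, invert, n_boot=500, seed=0):
    """Resample the data with the estimated observation noise, perturb the
    baseline (prior) model by 3%, rerun the deterministic inversion, and return
    the mean and standard deviation of the bootstrap estimates."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_boot):
        noisy_data = data + rng.normal(0.0, noise_std, size=data.shape)
        noisy_prior = baseline * (1.0 + rng.normal(0.0, 0.03, size=baseline.shape))
        samples.append(invert(noisy_data, noisy_prior))
    samples = np.asarray(samples)
    return samples.mean(axis=0), samples.std(axis=0)

# Toy stand-in for the nonlinear ERT inversion: a linear least-squares "inversion".
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
data = G @ np.array([0.3, 0.1])
mean, std = parametric_bootstrap(
    data, noise_std=0.01, baseline=np.zeros(2),
    invert=lambda d, prior: np.linalg.lstsq(G, d, rcond=None)[0])
print(mean, std)
```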
SU-F-T-564: 3 Year Experience of Treatment Plan Quality Assurance for Vero SBRT Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Z; Li, Z; Mamalui, M
2016-06-15
Purpose: To verify treatment plan monitor units from the iPlan treatment planning system for Vero Stereotactic Body Radiotherapy (SBRT) treatment using both software-based and (homogeneous and heterogeneous) phantom-based approaches. Methods: Dynamic conformal arcs (DCA) were used for SBRT treatment of oligometastasis patients using the Vero linear accelerator. For each plan, the Monte Carlo calculated treatment plan MU (prescribed dose to water with 1% variance) is verified first by RadCalc software with a 3% difference threshold. Beyond 3% difference, treatment plans were copied onto a (homogeneous) Scanditronix phantom for non-lung patients and onto a (heterogeneous) CIRS phantom for lung patients, and the corresponding plan dose was measured using a cc01 ion chamber. The difference between the planned and measured dose was recorded. For the past 3 years, we have treated 180 patients with 315 targets. Of these patients, 99 targets' treatment plan RadCalc calculations exceeded the 3% threshold and phantom-based measurements were performed, with 26 plans using the Scanditronix phantom and 73 plans using the CIRS phantom. Means and standard deviations of the dose differences were obtained and presented. Results: For all patient RadCalc calculations, the mean dose difference is 0.76% with a standard deviation of 5.97%. For non-lung patient plan Scanditronix phantom measurements, the mean dose difference is 0.54% with a standard deviation of 2.53%; for lung patient plan CIRS phantom measurements, the mean dose difference is −0.04% with a standard deviation of 1.09%. The maximum dose difference is 3.47% for Scanditronix phantom measurements and 3.08% for CIRS phantom measurements. Conclusion: Limitations in the secondary MU check software lead to perceived large dose discrepancies for some of the lung patient SBRT treatment plans. Homogeneous and heterogeneous phantoms were used in plan quality assurance for non-lung and lung patients, respectively. Phantom-based QA showed relatively good agreement between iPlan calculated dose and measured dose.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
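The core FOSM step is a linear propagation of the input variance-covariance matrix through the model sensitivities. A minimal sketch with hypothetical sensitivities and a hypothetical transmissivity covariance (not MODFLOW output):

```python
import numpy as np

def fosm_head_std(jacobian, input_cov):
    """First-order second moment propagation: Cov(head) = J @ Cov(inputs) @ J.T;
    the head standard deviations are the square roots of the diagonal."""
    J = np.asarray(jacobian, dtype=float)
    cov_out = J @ np.asarray(input_cov, dtype=float) @ J.T
    return np.sqrt(np.diag(cov_out))

# Hypothetical sensitivities of 3 heads to 2 uncertain transmissivity zones,
# with a correlated input covariance (as from a conditional geostatistical model).
J = np.array([[0.8, 0.1],
              [0.4, 0.4],
              [0.1, 0.9]])
input_cov = np.array([[0.25, 0.05],
                      [0.05, 0.16]])
print(fosm_head_std(J, input_cov))
```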
Waldmann, P; García-Gil, M R; Sillanpää, M J
2005-06-01
Comparison of the level of differentiation at neutral molecular markers (estimated as F_ST or G_ST) with the level of differentiation at quantitative traits (estimated as Q_ST) has become a standard tool for inferring that there is differential selection between populations. We estimated Q_ST of timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing the information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of Q_ST that could not be compared in any sensible way with the unimodal posterior of the microsatellite F_ST. In order to avoid publishing studies with flawed Q_ST estimates, we recommend that future studies should present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is of essential importance to also check that the residuals are normally distributed and do not follow any systematically deviating trends.
[Do we always correctly interpret the results of statistical nonparametric tests].
Moczko, Jerzy A
2014-01-01
Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze clinical and laboratory data. These tests are considered to be extremely flexible and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking the fulfillment of conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. The article presents an example based on the Mann-Whitney test showing that treating these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
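A hedged illustration of the caveat: the Mann-Whitney U test below (scipy, hypothetical skewed data) reports whether two samples differ, but the test statistic alone does not identify whether the difference is in location, spread, or shape:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical skewed laboratory results for two patient groups.
rng = np.random.default_rng(7)
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=25)
group_b = rng.lognormal(mean=1.3, sigma=0.4, size=25)

# The U test does not assume normality, but a small p-value by itself does not
# say whether the groups differ in location, spread, or shape.
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(stat, p)
```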
NASA Astrophysics Data System (ADS)
Vicente-Serrano, Sergio M.; Van der Schrier, Gerard; Beguería, Santiago; Azorin-Molina, Cesar; Lopez-Moreno, Juan-I.
2015-07-01
In this study we analyzed the sensitivity of four drought indices to precipitation (P) and reference evapotranspiration (ETo) inputs. The four drought indices are the Palmer Drought Severity Index (PDSI), the Reconnaissance Drought Index (RDI), the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Palmer Drought Index (SPDI). The analysis uses long-term simulated series with varying averages and variances, as well as global observational data to assess the sensitivity to real climatic conditions in different regions of the World. The results show differences in the sensitivity to ETo and P among the four drought indices. The PDSI shows the lowest sensitivity to variation in their climate inputs, probably as a consequence of the standardization procedure of soil water budget anomalies. The RDI is only sensitive to the variance but not to the average of P and ETo. The SPEI shows the largest sensitivity to ETo variation, with clear geographic patterns mainly controlled by aridity. The low sensitivity of the PDSI to ETo makes the PDSI perhaps less apt as the suitable drought index in applications in which the changes in ETo are most relevant. On the contrary, the SPEI shows equal sensitivity to P and ETo. It works as a perfect supply and demand system modulated by the average and standard deviation of each series and combines the sensitivity of the series to changes in magnitude and variance. Our results are a robust assessment of the sensitivity of drought indices to P and ETo variation, and provide advice on the use of drought indices to detect climate change impacts on drought severity under a wide variety of climatic conditions.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is used to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R(2) = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
Mineral composition of Atriplex hymenelytra growing in the northern Mojave Desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, A.; Romney, E.M.; Hunter, R.B.
1980-01-01
Fifty samples of Atriplex hymenelytra (Torr.) S. Wats. were collected from several different locations in southern Nevada and California to test variability in mineral composition. Only Na, V, P, Ca, Mg, Mn, and Sr in the samples appeared to represent a uniform population resulting in normal curves for frequency distribution. Even so, about 40 percent of the variance for these elements was due to location. All elements differed enough with location so that no element really represented a uniform population. The coefficient of variation for most elements was over 40 percent and one was over 100 percent. The proportion of variance due to analytical variation averaged 16.2 ± 13.1 percent (standard deviation), that due to location was 43.0 ± 13.4 percent, and that due to variation of plants within location was 40.7 ± 13.0 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, J; Slaughter, D; Norman, E
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that this background variation could be substantial, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance determined by these experiments is a factor of 2 smaller than the values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel
ERIC Educational Resources Information Center
Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying
2017-01-01
In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
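For readers who prefer code to a spreadsheet, the same point can be checked with a few lines of Python (a minimal sketch; the population mean, variance, and sample size are arbitrary illustrative choices, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0            # population variance (sigma = 2)
n, n_trials = 5, 100_000  # small sample size, many repetitions

samples = rng.normal(loc=10.0, scale=true_var ** 0.5, size=(n_trials, n))
dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2

biased = dev2.sum(axis=1) / n          # divide by n
unbiased = dev2.sum(axis=1) / (n - 1)  # divide by n - 1

print(f"true variance           : {true_var:.3f}")
print(f"mean of sum/n estimates : {biased.mean():.3f}")   # ~ (n-1)/n * sigma^2
print(f"mean of sum/(n-1)       : {unbiased.mean():.3f}") # ~ sigma^2
```

On average the estimator that divides by n falls short of the population variance by the factor (n - 1)/n, which is exactly the bias the spreadsheet simulation is built to make visible.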
Nittrouer, Susan; Sansom, Emily; Low, Keri; Rice, Caitlin; Caldwell-Tarr, Amanda
2014-01-01
Listeners use their knowledge of how language is structured to aid speech recognition in everyday communication. When it comes to children with congenital hearing loss severe enough to warrant cochlear implants (CIs), the question arises of whether these children can acquire the language knowledge needed to aid speech recognition, in spite of only having spectrally degraded signals available to them. That question was addressed in the present study. Specifically, there were three goals: (1) to compare the language structures used by children with CIs to those of children with normal hearing (NH); (2) to assess the amount of variance in the language measures explained by phonological awareness and lexical knowledge; and (3) to assess the amount of variance in the language measures explained by factors related to the hearing loss itself and subsequent treatment. Language samples were obtained and transcribed for 40 children who had just completed kindergarten: 19 with NH and 21 with CIs. Five measures were derived from Systematic Analysis of Language Transcripts: (1) mean length of utterance in morphemes, (2) number of conjunctions, excluding and, (3) number of personal pronouns, (4) number of bound morphemes, and (5) number of different words. Measures were also collected on phonological awareness and lexical knowledge. Statistics examined group differences, as well as the amount of variance in the language measures explained by phonological awareness, lexical knowledge, and factors related to hearing loss and its treatment for children with CIs. Mean scores of children with CIs were roughly one standard deviation below those of children with NH on all language measures, including lexical knowledge, matching outcomes of other studies. Mean scores of children with CIs were closer to two standard deviations below those of children with NH on two out of three measures of phonological awareness (specifically those related to phonemic structure). Lexical knowledge explained significant amounts of variance on three language measures, but only one measure of phonological awareness (sensitivity to word-final phonemic structure) explained any significant amount of unique variance beyond that, and on only one language measure (number of bound morphemes). Age at first implant, but no other factors related to hearing loss or its treatment, explained significant amounts of variance on the language measures, as well. In spite of early intervention and advances in implant technology, children with CIs are still delayed in learning language, but grammatical knowledge is less affected than phonological awareness. Because there was little contribution to language development measured for phonological awareness independent of lexical knowledge, it was concluded that children with CIs could benefit from intervention focused specifically on helping them learn language structures, in spite of the likely phonological deficits they experience as a consequence of having degraded inputs.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Shimamura, Tomoko; Sumikura, Yoshihiro; Yamazaki, Takeshi; Tada, Atsuko; Kashiwagi, Takehiro; Ishikawa, Hiroya; Matsui, Toshiro; Sugimoto, Naoki; Akiyama, Hiroshi; Ukeda, Hiroyuki
2014-01-01
An inter-laboratory evaluation study was conducted in order to evaluate the antioxidant capacity of food additives by using a 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay. Four antioxidants used as existing food additives (i.e., tea extract, grape seed extract, enju extract, and d-α-tocopherol) and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox) were used as analytical samples, and 14 laboratories participated in this study. The repeatability relative standard deviation (RSD(r)) of the IC50 of Trolox, four antioxidants, and the Trolox equivalent antioxidant capacity (TEAC) were 1.8-2.2%, 2.2-2.9%, and 2.1-2.5%, respectively. Thus, the proposed DPPH assay showed good performance within the same laboratory. The reproducibility relative standard deviation (RSD(R)) of IC50 of Trolox, four antioxidants, and TEAC were 4.0-7.9%, 6.0-11%, and 3.7-9.3%, respectively. The RSD(R)/RSD(r) values of TEAC were lower than, or nearly equal to, those of IC50 of the four antioxidants, suggesting that the use of TEAC was effective for reducing the variance among the laboratories. These results showed that the proposed DPPH assay could be used as a standard method to evaluate the antioxidant capacity of food additives.
Vitezica, Zulma G; Varona, Luis; Legarra, Andres
2013-12-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
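As a rough sketch of the matrices involved, the Python below builds an additive relationship matrix G and a dominance relationship matrix D from a 0/1/2 genotype matrix using one common parameterization (additive coding genotype - 2p; dominance-deviation coding -2q², 2pq, -2p² for genotypes 2, 1, 0, with D scaled by the sum of (2pq)²). The coding and scaling are stated here as assumptions in the spirit of the abstract rather than a verbatim transcription of the paper's equations, and the genotypes are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 20, 500                       # illustrative sizes
p = rng.uniform(0.1, 0.9, n_snp)             # allele frequencies
M = rng.binomial(2, p, size=(n_ind, n_snp))  # 0/1/2 allele counts per individual
q = 1.0 - p

# Additive (breeding-value) coding: genotype minus 2p
Z = M - 2.0 * p
G = Z @ Z.T / np.sum(2.0 * p * q)

# Dominance-deviation coding: -2q^2, 2pq, -2p^2 for genotypes 2, 1, 0
W = np.where(M == 2, -2.0 * q ** 2,
             np.where(M == 1, 2.0 * p * q, -2.0 * p ** 2))
D = W @ W.T / np.sum((2.0 * p * q) ** 2)

print(G.shape, D.shape)                       # both n_ind x n_ind
print(np.diag(G).mean(), np.diag(D).mean())   # diagonals average near 1
```

Either matrix can then be supplied as a covariance structure in a mixed model to separate additive variance from dominance variance, which is the use described above.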
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) Independent non-uniform channel noise, b) Independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method to include internal noise into their Hotelling models.
Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.
Azami, Hamed; Fernández, Alberto; Escudero, Javier
2017-11-01
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been a recent interest in using other statistical moments than the mean, i.e., variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFE σ ) and mean (RCMFE μ ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFE σ and RCMFE μ , in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results evidenced that the RCMFE σ and RCMFE μ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFE μ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFE σ may do so, and vice versa. The results showed that RCMFE σ -based features lead to higher classification accuracies in comparison with the RCMFE μ -based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477 .
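The modification at the heart of the σ-based variant is easy to state in code. The sketch below (Python/NumPy, for illustration only; the authors' released Matlab code remains the reference implementation) produces mean-based and standard-deviation-based coarse-grained series for a given scale factor, which would then be passed to the chosen fuzzy entropy estimator:

```python
import numpy as np

def coarse_grain(x, scale, stat="mean"):
    """Collapse non-overlapping windows of length `scale` with the chosen statistic."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    windows = x[:n].reshape(-1, scale)
    if stat == "mean":               # classical MSE coarse-graining
        return windows.mean(axis=1)
    if stat == "std":                # sigma-based coarse-graining (spread over scales)
        return windows.std(axis=1, ddof=1)
    raise ValueError("stat must be 'mean' or 'std'")

signal = np.random.default_rng(2).normal(size=3000)
for tau in (2, 5, 10):
    mu_series = coarse_grain(signal, tau, "mean")
    sd_series = coarse_grain(signal, tau, "std")
    print(tau, len(mu_series), mu_series.std(), sd_series.mean())
```

The refined composite variant repeats this for every possible window offset at each scale and averages the resulting entropies, which is what stabilizes the estimates for short signals.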
Nasal airway and septal variation in unilateral and bilateral cleft lip and palate.
Starbuck, John M; Friel, Michael T; Ghoneima, Ahmed; Flores, Roberto L; Tholpady, Sunil; Kula, Katherine
2014-10-01
Cleft lip and palate (CLP) affects the dentoalveolar and nasolabial facial regions. Internal and external nasal dysmorphology may persist in individuals born with CLP despite surgical interventions. 7-18 year old individuals born with unilateral and bilateral CLP (n = 50) were retrospectively assessed using cone beam computed tomography. Anterior, middle, and posterior nasal airway volumes were measured on each facial side. Septal deviation was measured at the anterior and posterior nasal spine, and the midpoint between these two locations. Data were evaluated using principal components analysis (PCA), multivariate analysis of variance (MANOVA), and post-hoc ANOVA tests. PCA results show partial separation in high dimensional space along PC1 (48.5% variance) based on age groups and partial separation along PC2 (29.8% variance) based on CLP type and septal deviation patterns. MANOVA results indicate that age (P = 0.007) and CLP type (P ≤ 0.001) significantly affect nasal airway volume and septal deviation. ANOVA results indicate that anterior nasal volume is significantly affected by age (P ≤ 0.001), whereas septal deviation patterns are significantly affected by CLP type (P ≤ 0.001). Age and CLP type affect nasal airway volume and septal deviation patterns. Nasal airway volumes tend to be reduced on the clefted sides of the face relative to non-clefted sides of the face. Nasal airway volumes tend to strongly increase with age, whereas septal deviation values tend to increase only slightly with age. These results suggest that functional nasal breathing may be impaired in individuals born with the unilateral and bilateral CLP deformity. © 2014 Wiley Periodicals, Inc.
Are the Stress Drops of Small Earthquakes Good Predictors of the Stress Drops of Larger Earthquakes?
NASA Astrophysics Data System (ADS)
Hardebeck, J.
2017-12-01
Uncertainty in PSHA could be reduced through better estimates of stress drop for possible future large earthquakes. Studies of small earthquakes find spatial variability in stress drop; if large earthquakes have similar spatial patterns, their stress drops may be better predicted using the stress drops of small local events. This regionalization implies the variance with respect to the local mean stress drop may be smaller than the variance with respect to the global mean. I test this idea using the Shearer et al. (2006) stress drop catalog for M1.5-3.1 events in southern California. I apply quality control (Hauksson, 2015) and remove near-field aftershocks (Wooddell & Abrahamson, 2014). The standard deviation of the distribution of the log10 stress drop is reduced from 0.45 (factor of 3) to 0.31 (factor of 2) by normalizing each event's stress drop by the local mean. I explore whether a similar variance reduction is possible when using the Shearer catalog to predict stress drops of larger southern California events. For catalogs of moderate-sized events (e.g. Kanamori, 1993; Mayeda & Walter, 1996; Boyd, 2017), normalizing by the Shearer catalog's local mean stress drop does not reduce the standard deviation compared to the unmodified stress drops. I compile stress drops of larger events from the literature, and identify 15 M5.5-7.5 earthquakes with at least three estimates. Because of the wide range of stress drop estimates for each event, and the different techniques and assumptions, it is difficult to assign a single stress drop value to each event. Instead, I compare the distributions of stress drop estimates for pairs of events, and test whether the means of the distributions are statistically significantly different. The events divide into 3 categories: low, medium, and high stress drop, with significant differences in mean stress drop between events in the low and the high stress drop categories. I test whether the spatial patterns of the Shearer catalog stress drops can predict the categories of the 15 events. I find that they cannot, rather the large event stress drops are uncorrelated with the local mean stress drop from the Shearer catalog. These results imply that the regionalization of stress drops of small events does not extend to the larger events, at least with current standard techniques of stress drop estimation.
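The variance-reduction bookkeeping described above amounts to measuring the spread of log10 stress drops about regional means instead of the global mean. A minimal sketch of that computation (Python, with simulated stress drops and region labels, not the Shearer et al. catalog) is:

```python
import numpy as np

rng = np.random.default_rng(3)
regions = np.repeat(np.arange(20), 50)            # 20 regions, 50 events each
region_means = rng.normal(0.7, 0.35, 20)          # regional log10 stress-drop levels (illustrative)
log_sd = region_means[regions] + rng.normal(0, 0.30, regions.size)

global_resid = log_sd - log_sd.mean()                              # deviation from global mean
local_means = np.array([log_sd[regions == r].mean() for r in range(20)])
local_resid = log_sd - local_means[regions]                        # deviation from local (regional) mean

print(f"std about global mean: {global_resid.std(ddof=1):.2f}")
print(f"std about local mean : {local_resid.std(ddof=1):.2f}")
```

When a regional signal is present, the second number is noticeably smaller, which mirrors the 0.45-to-0.31 reduction quoted for the small-event catalog; the abstract's finding is that this reduction did not carry over to the larger events.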
The correlation between relatives on the supposition of genomic imprinting.
Spencer, Hamish G
2002-05-01
Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight.
Ramsey, Elijah W.; Nelson, G.
2005-01-01
To maximize the spectral distinctiveness (information) of the canopy reflectance, an atmospheric correction strategy was implemented to provide accurate estimates of the intrinsic reflectance from the Earth Observing 1 (EO1) satellite Hyperion sensor signal. In rendering the canopy reflectance, an estimate of optical depth derived from a measurement of downwelling irradiance was used to drive a radiative transfer simulation of atmospheric scattering and attenuation. During the atmospheric model simulation, the input whole-terrain background reflectance estimate was changed to minimize the differences between the model predicted and the observed canopy reflectance spectra at 34 sites. Lacking appropriate spectrally invariant scene targets, inclusion of the field and predicted comparison maximized the model accuracy and, thereby, the detail and precision in the canopy reflectance necessary to detect low percentage occurrences of invasive plants. After accounting for artifacts surrounding prominent absorption features from about 400 nm to 1000 nm, the atmospheric adjustment strategy correctly explained 99% of the observed canopy reflectance spectra variance. Separately, model simulation explained an average of 88% ± 9% of the observed variance in the visible and 98% ± 1% in the near-infrared wavelengths. In the 34 model simulations, maximum differences between the observed and predicted reflectances were typically less than ±1% in the visible; however, maximum reflectance differences higher than ±1.6% (−2.3%) at more than a few wavelengths were observed at three sites. In the near-infrared wavelengths, maximum reflectance differences remained less than ±3% for 68% of the comparisons (±1 standard deviation) and less than ±6% for 95% of the comparisons (±2 standard deviations). Higher reflectance differences in the visible and near-infrared wavelengths were most likely associated with problems in the comparison, not in the model generation. © 2005 US Government.
Age-related variation and predictors of long-term quality of life in germ cell tumor survivors.
Hartung, Tim J; Mehnert, Anja; Friedrich, Michael; Hartmann, Michael; Vehling, Sigrun; Bokemeyer, Carsten; Oechsle, Karin
2016-02-01
To compare long-term health-related quality of life (QoL) in germ cell tumor survivors (GCTS) and age-adjusted men and to identify predictors of variation in long-term QoL in GCTS. We used the Short-Form Health Survey to measure QoL in a cross-sectional sample of 164 survivors of germ cell tumors from Hamburg, Germany. QoL was compared with age-adjusted German norm data. Sociodemographic and medical data from questionnaires and medical records were used to find predictors of QoL. On average, patients were 44.4 years old (standard deviation = 9.6 y) and average time since first germ cell tumor diagnosis was 11.6 years (standard deviation = 7.3 y). We found significantly lower mental component scores in GCTS when compared with norm data (Hedges g = -0.44, P < 0.001). An exploratory analysis by age group showed the largest difference in mental QoL in survivors aged 31 to 40 years (Hedges g = -0.67). Linear regression analysis revealed age (β = -0.46, P < 0.001), marital status (β = 0.20, P = 0.024), advanced secondary qualifications (β = -0.25, P = 0.001), time since diagnosis (β = 0.17, P = 0.031), and tumor stage (β = 0.17, P = 0.024) as statistically significant predictors of the physical component score, accounting for 22% of the variance. Statistically significant predictors of the mental component score were higher secondary qualifications (β = 0.17, P = 0.033) and unemployment (β = -0.21, P = 0.009), accounting for 6% of the variance. Survivors of germ cell tumors can expect an overall long-term QoL similar to that of other men of their age. Copyright © 2016 Elsevier Inc. All rights reserved.
2012-01-01
Background: When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods: An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results: Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions: The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
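Under the binormal model, the closed form referred to above is simply the standard normal CDF evaluated at the standardized difference, (mu1 - mu0)/sqrt(sigma0² + sigma1²). A short check of this expression against an empirical c-statistic (Python/SciPy; the parameter values are illustrative, not taken from the paper) is:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu0, mu1, sd0, sd1 = 0.0, 1.0, 1.0, 1.5        # illustrative binormal parameters
n0, n1 = 2000, 2000

# Closed-form c-statistic under the binormal model
c_theory = norm.cdf((mu1 - mu0) / np.sqrt(sd0 ** 2 + sd1 ** 2))

# Empirical c-statistic: P(X1 > X0) estimated from all simulated pairs
x0 = rng.normal(mu0, sd0, n0)
x1 = rng.normal(mu1, sd1, n1)
c_empirical = (x1[:, None] > x0[None, :]).mean()

print(f"closed form : {c_theory:.4f}")
print(f"empirical   : {c_empirical:.4f}")
```

The same log-odds ratio can therefore correspond to very different c-statistics depending on how heterogeneous the explanatory variable is, which is the conclusion drawn above.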
NASA Astrophysics Data System (ADS)
Aguirre, E. E.; Karchewski, B.
2017-12-01
DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known, as the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results costing time and money. The present study examines the extent to which random variations in electrical properties (i.e. electrical conductivity) affect potential difference readings and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a Finite Difference (FD) approximation of an appropriate simplification of Maxwell's equations implemented in Matlab. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance, and are assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance in potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of standard deviation in the material properties (<10% of mean), we observed a linear correlation between variance of resistivity and variance in apparent resistivity.
NASA Astrophysics Data System (ADS)
Monaghan, Kari L.
The problem addressed was the concern for aircraft safety rates as they relate to the rate of maintenance outsourcing. Data gathered from 14 passenger airlines: AirTran, Alaska, America West, American, Continental, Delta, Frontier, Hawaiian, JetBlue, Midwest, Northwest, Southwest, United, and USAir covered the years 1996 through 2008. A quantitative correlational design, utilizing Pearson's correlation coefficient, and the coefficient of determination were used in the present study to measure the correlation between variables. Elements of passenger airline aircraft maintenance outsourcing and aircraft accidents, incidents, and pilot deviations within domestic passenger airline operations were analyzed, examined, and evaluated. Rates of maintenance outsourcing were analyzed to determine the association with accident, incident, and pilot deviation rates. Maintenance outsourcing rates used in the evaluation were the yearly dollar expenditure of passenger airlines for aircraft maintenance outsourcing as they relate to the total airline aircraft maintenance expenditures. Aircraft accident, incident, and pilot deviation rates used in the evaluation were the yearly number of accidents, incidents, and pilot deviations per miles flown. The Pearson r-values were calculated to measure the linear relationship strength between the variables. There were no statistically significant correlation findings for accidents, r(174)=0.065, p=0.393, and incidents, r(174)=0.020, p=0.793. However, there was a statistically significant correlation for pilot deviation rates, r(174)=0.204, p=0.007 thus indicating a statistically significant correlation between maintenance outsourcing rates and pilot deviation rates. The calculated R square value of 0.042 represents the variance that can be accounted for in aircraft pilot deviation rates by examining the variance in aircraft maintenance outsourcing rates; accordingly, 95.8% of the variance is unexplained. Suggestions for future research include replication of the present study with the inclusion of maintenance outsourcing rate data for all airlines differentiated between domestic and foreign repair station utilization. Replication of the present study every five years is also encouraged to continue evaluating the impact of maintenance outsourcing practices on passenger airline safety.
Polar motion results from GEOS 3 laser ranging
NASA Technical Reports Server (NTRS)
Schutz, B. E.; Tapley, B. D.; Ries, J.; Eanes, R.
1979-01-01
The observability of polar motion from laser range data has been investigated, and the contributions from the dynamical and kinematical effects have been evaluated. Using 2-day arcs with GEOS 3 laser data, simultaneous solutions for pole position components and orbit elements have been obtained for a 2-week interval spanning August 27 to September 10, 1975, using three NASA Goddard Space Flight Center stations located at Washington, D.C., Bermuda, and Grand Turk. The results for the y-component of pole position from this limited data set differenced with the BIH linearly interpolated values yield a mean of 39 cm and a standard deviation of 1.07 m. Consideration of the variance associated with each estimate yields a mean of 20 cm and a standard deviation of 81 cm. The results for the x-component of pole position indicate that the mean value is in fair agreement with the BIH; however, the x-coordinate determination is weaker than the y-coordinate determination due to the distribution of laser sites (all three are between 77 deg W and 65 deg W) which results in greater sensitivity to the data distribution. In addition, the sensitivity of these results to various model parameters is discussed.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
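The idea of solving an auxiliary correlated process with a known solution alongside the main one is, in spirit, a control-variate construction. The sketch below is not the authors' Fokker–Planck particle scheme; it is a generic Python illustration in which the two processes share the same random increments, and the auxiliary process's known mean is used to correct the main estimate:

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 5000, 200, 0.01
theta_main, theta_aux = 1.0, 1.2     # relaxation rates; the auxiliary process has a known mean

x_main = np.ones(n_particles)        # main Ornstein-Uhlenbeck-like process
x_aux = np.ones(n_particles)         # auxiliary process driven by the same noise

for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_particles)   # shared increments -> strong correlation
    x_main += -theta_main * x_main * dt + 0.5 * dw
    x_aux += -theta_aux * x_aux * dt + 0.5 * dw

t = n_steps * dt
aux_exact = np.exp(-theta_aux * t)   # mean of the auxiliary process (exact for the continuous SDE)

plain = x_main.mean()
corrected = x_main.mean() - (x_aux.mean() - aux_exact)   # control-variate correction

y = x_main - (x_aux - aux_exact)
print(f"plain estimate        : {plain:.4f}")
print(f"corrected estimate    : {corrected:.4f}")
print(f"reference main mean   : {np.exp(-theta_main * t):.4f}")
print(f"per-particle std plain: {x_main.std(ddof=1):.4f}")
print(f"per-particle std corr : {y.std(ddof=1):.4f}")
```

Because the two processes are driven by identical increments, their difference has far smaller variance than the main process alone, which is the mechanism behind the dramatic error reduction reported for low Mach numbers.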
Juhel-Gaugain, M; McEvoy, J D; VanGinkel, L A
2000-12-01
The experimental design of a material certification programme is described. The matrix reference materials (RMs) comprised chlortetracycline (CTC)-containing and CTC-free lyophilised porcine liver, kidney and muscle produced under the European Commission's Standards Measurements and Testing (SMT) programme. The aim of the certification programme was to determine accurately and precisely the concentration of CTC and 4-epi-chlortetracycline (epi-CTC) contained in the RMs. A multi-laboratory approach was used to certify analyte concentrations. Participants (n = 19) were instructed to strictly adhere to previously established guidelines. Following the examination of analytical performance criteria, statistical manipulation of results submitted by 13 laboratories, (6 withdrew) allowed an estimate to be made of the true value of the analyte content. The Nalimov test was used for detection of outlying results. The Cochran and Bartlett tests were employed for testing the homogeneity of variances. The normality of results distribution was tested according to the Kolmogorov-Smirnov-Lilliefors test. One-way analysis of variance (ANOVA) was employed to calculate the within and between-laboratory standard deviations, the overall mean and confidence interval for the CTC and epi-CTC content of each of the RMs. Certified values were within or very close to the target concentration ranges specified in the SMT contract. These studies have demonstrated the successful production and certification of CTC-containing and CTC-free porcine RMs.
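The within- and between-laboratory standard deviations come from the one-way ANOVA mean squares in the usual collaborative-study way: the repeatability variance is the within-laboratory mean square, and the between-laboratory variance is (MS_between - MS_within)/n for n replicates per laboratory. A minimal sketch with invented data (Python; not the certification results) is:

```python
import numpy as np

rng = np.random.default_rng(6)
n_labs, n_rep = 13, 4
true_value, s_between, s_within = 100.0, 3.0, 2.0        # illustrative levels and dispersions

lab_bias = rng.normal(0.0, s_between, n_labs)
data = true_value + lab_bias[:, None] + rng.normal(0.0, s_within, (n_labs, n_rep))

lab_means = data.mean(axis=1)
grand_mean = data.mean()

ms_within = ((data - lab_means[:, None]) ** 2).sum() / (n_labs * (n_rep - 1))
ms_between = n_rep * ((lab_means - grand_mean) ** 2).sum() / (n_labs - 1)

s_r = np.sqrt(ms_within)                                   # repeatability (within-lab) SD
s_L = np.sqrt(max(ms_between - ms_within, 0.0) / n_rep)    # between-laboratory SD
s_R = np.sqrt(s_r ** 2 + s_L ** 2)                         # reproducibility SD

print(f"grand mean = {grand_mean:.2f}, s_r = {s_r:.2f}, s_L = {s_L:.2f}, s_R = {s_R:.2f}")
```

The certified value is then the grand mean with a confidence interval built from these components, after the outlier and homogeneity screening (Nalimov, Cochran, Bartlett) described above.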
Choi, Young Jun; Lee, Jeong Hyun; Kim, Hye Ok; Kim, Dae Yoon; Yoon, Ra Gyoung; Cho, So Hyun; Koh, Myeong Ju; Kim, Namkug; Kim, Sang Yoon; Baek, Jung Hwan
2016-01-01
To explore the added value of histogram analysis of apparent diffusion coefficient (ADC) values over magnetic resonance (MR) imaging and fluorine 18 ((18)F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) for the detection of occult palatine tonsil squamous cell carcinoma (SCC) in patients with cervical nodal metastasis from a cancer of an unknown primary site. The institutional review board approved this retrospective study, and the requirement for informed consent was waived. Differences in the bimodal histogram parameters of the ADC values were assessed among occult palatine tonsil SCC (n = 19), overt palatine tonsil SCC (n = 20), and normal palatine tonsils (n = 20). One-way analysis of variance was used to analyze differences among the three groups. Receiver operating characteristic curve analysis was used to determine the best differentiating parameters. The increased sensitivity of histogram analysis over MR imaging and (18)F-FDG PET/CT for the detection of occult palatine tonsil SCC was evaluated as added value. Histogram analysis showed statistically significant differences in the mean, standard deviation, and 50th and 90th percentile ADC values among the three groups (P < .0045). Occult palatine tonsil SCC had a significantly higher standard deviation for the overall curves, mean and standard deviation of the higher curves, and 90th percentile ADC value, compared with normal palatine tonsils (P < .0167). Receiver operating characteristic curve analysis showed that the standard deviation of the overall curve best delineated occult palatine tonsil SCC from normal palatine tonsils, with a sensitivity of 78.9% (15 of 19 patients) and a specificity of 60% (12 of 20 patients). The added value of ADC histogram analysis was 52.6% over MR imaging alone and 15.8% over combined conventional MR imaging and (18)F-FDG PET/CT. Adding ADC histogram analysis to conventional MR imaging can improve the detection sensitivity for occult palatine tonsil SCC in patients with a cervical nodal metastasis originating from a cancer of an unknown primary site. © RSNA, 2015.
Danoix, F; Grancher, G; Bostel, A; Blavette, D
2007-09-01
Atom probe is a very powerful instrument for measuring concentrations on a sub-nanometric scale [M.K. Miller, G.D.W. Smith, Atom Probe Microanalysis, Principles and Applications to Materials Problems, Materials Research Society, Pittsburgh, 1989]. Atom probe is therefore a unique tool to study and characterise finely decomposed metallic materials. Composition profiles or 3D mapping can be realised by gathering elemental composition measurements. As the detector efficiency is generally not equal to 1, the measured compositions are only estimates of the actual values. The variance of an estimate depends on which information is to be estimated, and it can be calculated when the detection process is known. These two papers give the complete analytical derivation and expressions for the variance of composition measurements in several situations encountered when using the atom probe. In the first paper, we concentrate on the analytical derivation of the variance for composition estimates obtained from a conventional one-dimensional (1D) atom probe. In particular, the existing expressions, and the basic hypotheses on which they rely, are reconsidered, and complete analytical demonstrations are established. In the second, companion paper, the case of the 3D atom probe is treated, highlighting how knowledge of the 3D positions of detected ions modifies the analytical derivation of the variance of local composition data.
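For the simplest case treated in work of this kind, estimating the concentration of a solid solution from N detected atoms, the estimate behaves binomially, so its variance is c(1 - c)/N, and a detector efficiency below 1 enters only by reducing N. The sketch below states that textbook result and checks it by simulation (Python; the concentration, efficiency, and atom counts are illustrative, and this is not a substitute for the full derivations in the two papers):

```python
import numpy as np

rng = np.random.default_rng(7)
c_true, eta = 0.05, 0.6          # true solute fraction and detector efficiency (illustrative)
n_field_evaporated = 2000        # atoms removed from the analysed volume
n_detected = rng.binomial(n_field_evaporated, eta)    # atoms actually collected

# Theoretical standard deviation of the composition estimate from n_detected atoms
sigma_theory = np.sqrt(c_true * (1 - c_true) / n_detected)

# Check by simulating many composition measurements with the same detected count
c_hat = rng.binomial(n_detected, c_true, size=50_000) / n_detected
print(f"n_detected = {n_detected}")
print(f"theory    : {sigma_theory:.4f}")
print(f"simulated : {c_hat.std(ddof=1):.4f}")
```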
Sheehan, Frances T; Borotikar, Bhushan S; Behnam, Abrahm J; Alter, Katharine E
2012-07-01
A potential source of patellofemoral pain, one of the most common problems of the knee, is believed to be altered patellofemoral kinematics due to a force imbalance around the knee. Although no definitive etiology for this imbalance has been found, a weak vastus medialis is considered a primary factor. Therefore, this study's purpose was to determine how the loss of vastus medialis obliquus force alters three-dimensional in vivo knee joint kinematics during a volitional extension task. Eighteen asymptomatic female subjects with no history of knee pain or pathology participated in this IRB approved study. Patellofemoral and tibiofemoral kinematics were derived from velocity data acquired using dynamic cine-phase contrast MRI. The same kinematics were then acquired immediately after administering a motor branch block to the vastus medialis obliquus using 3-5 ml of 1% lidocaine. A repeated measures analysis of variance was used to test the null hypothesis that the post- and pre-injection kinematics were no different. The null hypothesis was rejected for patellofemoral lateral shift (P = 0.003, max change = 1.8 mm, standard deviation = 1.7 mm), tibiofemoral lateral shift (P < 0.001, max change = 2.1 mm, standard deviation = 2.9 mm), and tibiofemoral external rotation (P < 0.001, max change = 3.7°, standard deviation = 4.4°). The loss of vastus medialis obliquus function produced kinematic changes that mirrored the axial plane kinematics seen in individuals with patellofemoral pain, but could not account for the full extent of these changes. Thus, vastus medialis weakness is likely a major factor in, but not the sole source of, altered patellofemoral kinematics in such individuals. Published by Elsevier Ltd.
VARIANCE OF MICROSOMAL PROTEIN AND ...
Differences in the pharmacokinetics of xenobiotics among humans make them differentially susceptible to risk. Differences in enzyme content can mediate pharmacokinetic differences. Microsomal protein is often isolated from liver to characterize enzyme content and activity, but no measures exist to extrapolate these data to the intact liver. Measures were developed from up to 60 samples of adult human liver to characterize the content of microsomal protein and cytochrome P450 (CYP) enzymes. Statistical evaluations are necessary to estimate values far from the mean value. Adult human liver contains 52.9 ± 1.476 mg microsomal protein per g; 2587 ± 1.84 pmol CYP2E1 per g; and 5237 ± 2.214 pmol CYP3A per g (geometric mean ± geometric standard deviation). These values are useful for identifying and testing susceptibility as a function of enzyme content when used to extrapolate in vitro rates of chemical metabolism for input to physiologically based pharmacokinetic models, which can then be exercised to quantify the effect of variance in enzyme expression on risk-relevant pharmacokinetic outcomes.
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the frequency deviations of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
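For reference, the classical (unweighted, non-overlapping) Allan variance of a series y at averaging factor m is half the mean squared difference of successive m-point averages; the weighted and multidimensional modifications reviewed here add weights and vector norms on top of this. A minimal sketch (Python) is:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of series y at averaging factor m."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    block_means = y[:n].reshape(-1, m).mean(axis=1)   # averages over windows of m samples
    diffs = np.diff(block_means)
    return 0.5 * np.mean(diffs ** 2)

rng = np.random.default_rng(8)
white = rng.normal(size=10_000)             # white noise: AVAR falls as 1/m
walk = np.cumsum(rng.normal(size=10_000))   # random walk: AVAR grows with m

for m in (1, 4, 16, 64):
    print(m, allan_variance(white, m), allan_variance(walk, m))
```

The slope of the Allan variance against m on a log-log plot is what discriminates the noise types, and that is the diagnostic the astrometric and geodetic applications rely on.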
Genetic Engineering of Optical Properties of Biomaterials
NASA Astrophysics Data System (ADS)
Gourley, Paul; Naviaux, Robert; Yaffe, Michael
2008-03-01
Baker's yeast cells are easily cultured and can be manipulated genetically to produce large numbers of bioparticles (cells and mitochondria) with controllable size and optical properties. We have recently employed nanolaser spectroscopy to study the refractive index of individual cells and isolated mitochondria from two mutant strains. Results show that biomolecular changes induced by mutation can produce bioparticles with radical changes in refractive index. Wild-type mitochondria exhibit a distribution with a well-defined mean and small variance. In striking contrast, mitochondria from one mutant strain produced a histogram that is highly collapsed with a ten-fold decrease in the mean and standard deviation. In a second mutant strain we observed an opposite effect with the mean nearly unchanged but the variance increased nearly a thousand-fold. Both histograms could be self-consistently modeled with a single, log-normal distribution. The strains were further examined by 2-dimensional gel electrophoresis to measure changes in protein composition. All of these data show that genetic manipulation of cells represents a new approach to engineering optical properties of bioparticles.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, which characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty value should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, and the NIST guidance based on it, provide directions for evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, used when a certificate gives limits without specifying a level of confidence (u(x) = a/√3); (2) a triangular distribution, used when values near the central point are more likely than values near the bounds (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variance CV% without specifying the distribution (a = half-width of the stated limits, u = standard uncertainty); and (4) a confidence interval.
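Combining such components into a standard uncertainty is mechanical once each one has been converted with the appropriate divisor and the results are summed in quadrature. The sketch below (Python; the component values are invented for illustration and do not describe any particular glucose method) combines a Type A repeatability term with rectangular and triangular Type B terms and applies a coverage factor of 2:

```python
import math

# Type A: repeatability of n replicate glucose measurements (illustrative numbers, mmol/L)
s, n = 0.08, 10
u_repeat = s / math.sqrt(n)

# Type B: calibrator certificate gives +/- 0.10 with no confidence level -> rectangular
a_cal = 0.10
u_cal = a_cal / math.sqrt(3)

# Type B: temperature-related bound where central values are more likely -> triangular
a_temp = 0.05
u_temp = a_temp / math.sqrt(6)

u_combined = math.sqrt(u_repeat ** 2 + u_cal ** 2 + u_temp ** 2)
U_expanded = 2.0 * u_combined            # coverage factor k = 2 (roughly 95 %)

print(f"u_c = {u_combined:.3f} mmol/L, U (k=2) = {U_expanded:.3f} mmol/L")
```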
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
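The central estimate in this setting follows from a log-normal assumption: if the untransformed variable has mean m and standard deviation s, the standard deviation of its natural logarithm is sqrt(ln(1 + s²/m²)). A short check of that relation (Python; the summary statistics are hypothetical, not taken from the paper) is:

```python
import numpy as np

m, s = 12.0, 5.0                                # reported arithmetic mean and SD (illustrative)

# SD of log(X) implied by a log-normal model with that mean and SD
sigma_log = np.sqrt(np.log(1.0 + (s / m) ** 2))
mu_log = np.log(m) - 0.5 * sigma_log ** 2        # matching log-scale mean

# Sanity check by simulation
x = np.random.default_rng(9).lognormal(mu_log, sigma_log, 1_000_000)
print(f"target mean/SD : {m:.2f} / {s:.2f}")
print(f"simulated      : {x.mean():.2f} / {x.std(ddof=1):.2f}")
print(f"SD of log(X)   : analytic {sigma_log:.3f}, simulated {np.log(x).std(ddof=1):.3f}")
```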
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Contract-Standards for Approval, § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
Ch Miliaresis, George
2016-06-01
A method is presented for elevation (h) and spatial position (X, Y) decorrelation stretch of annual precipitation summaries on a 1-km grid for SW USA for the period 2003 to 2014. Multiple linear regression analysis of the first and second principal components (PC) quantifies the variance in the multi-temporal precipitation imagery that is explained by X, Y, and elevation (h). The multi-temporal dataset is reconstructed from the PC1 and PC2 residual images and the later PCs by taking into account the variance that is not related to X, Y, and h. Clustering of the reconstructed precipitation dataset allowed the definition of positive (for example, in the Sierra Nevada and Salt Lake City) and negative (for example, in the San Joaquin Valley, Nevada, and the Colorado Plateau) precipitation anomalies. The temporal and spatial patterns defined from the spatially standardized multi-temporal precipitation imagery provide a tool for comparing regions in different geographic environments according to the deviation from the precipitation amount that they are expected to receive as a function of X, Y, and h. Such a standardization allows the definition of regions that are more or less sensitive to climatic change and gives an insight into the spatial impact of the atmospheric circulation that causes the annual precipitation.
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may be also considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance was also used in other fields than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finances. However, it seems that up to now, it has been exclusively applied for time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior for different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that radial Allan variance is the more appropriate way to have an estimator insensitive to the spatial axis and we applied it on SAR data acquired over eastern Turkey for the period 2003-2011. Spatial Allan variance allowed us to well characterize noise features, classically found in InSAR such as phase decorrelation producing white noise or atmospheric delays, behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann
2011-06-01
Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Despite being developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue-stained proteins in the near-infrared (NIR), the major error source caused by unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, staining and destaining were identified as the major sources of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
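The square-root-of-time behaviour quoted above can be illustrated with a toy cumulative-damage simulation; the exponential damage increments and their scale are assumptions for illustration, not the paper's stress-corrosion model.

```python
import numpy as np

rng = np.random.default_rng(7)
# Cumulative damage as a running sum of i.i.d. positive increments under a
# stationary random loading; many independent realizations are simulated.
increments = rng.exponential(scale=1e-4, size=(20_000, 1000))
damage = increments.cumsum(axis=1)
for t in (10, 100, 1000):
    d = damage[:, t - 1]
    # standard deviation grows ~ sqrt(t); coefficient of variation decays ~ 1/sqrt(t)
    print(t, round(d.std(), 5), round(d.std() / d.mean(), 4))
```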
Evidence of Chinese income dynamics and its effects on income scaling law
NASA Astrophysics Data System (ADS)
Xu, Yan; Wang, Yougui; Tao, Xiaobo; Ližbetinová, Lenka
2017-12-01
Using personal annual income data for 5 consecutive years (1998-2002) from CHIPS, the dynamic characteristics of Chinese income are studied; in particular, two hypotheses, time-reversal symmetry and independence of the growth rate from income, are tested. In high-income regions, an increasing trend of the standard deviation of the income growth rate is observed, which means the independent-growth-rate hypothesis fails to hold. This empirical finding is incorporated as a new mechanism into Gibrat's model, which then yields a distribution with a power-law tail. Our model's simulation results show that the increasing variance of income growth rates in higher income regions is the key ingredient for obtaining the power-law tail.
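A toy version of the mechanism can be written in a few lines: a Gibrat-type multiplicative process whose log-growth-rate standard deviation increases with income. The functional form of the variance-income relation, the cap on the dispersion, the lower reflecting bound, and all parameter values are assumptions for illustration, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_years = 100_000, 200
income = np.ones(n_agents)
for _ in range(n_years):
    # growth-rate dispersion rises with income (capped only as a numerical safeguard)
    sigma = np.minimum(0.10 + 0.05 * np.log1p(income), 1.0)
    growth = rng.normal(loc=0.0, scale=sigma)              # Gibrat-type multiplicative shock
    income = np.clip(income * np.exp(growth), 0.1, None)   # reflecting lower bound

# Rank-size view of the upper tail: an approximately straight line in log-log
# coordinates is the signature of a power-law-like tail.
top = np.sort(income)[::-1][:2000]
ranks = np.arange(1, top.size + 1)
print(np.polyfit(np.log(top), np.log(ranks), 1)[0])   # slope ~ -(Pareto exponent)
```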
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 189 stations west of the Continental Divide in Colorado are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
NASA Astrophysics Data System (ADS)
Witt, Thomas J.; Fletcher, N. E.
2010-10-01
We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
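As a rough illustration of the problem the paper addresses, the sketch below corrects the naive standard deviation of the mean for serial correlation using the sample autocorrelation function; the moving-average "filter" merely mimics a finite impulse response output stage and is not the instrument's actual response, and the lag truncation is an assumption.

```python
import numpy as np

def sdm_correlated(x, max_lag=50):
    """Standard deviation of the mean for a stationary but serially correlated
    series, using a truncated sum over the sample autocorrelation function."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode="full")[n - 1:] / (xc @ xc)
    # variance of the mean = (s^2 / n) * [1 + 2 * sum_k (1 - k/n) * rho_k]
    factor = 1.0 + 2.0 * sum((1 - k / n) * acf[k] for k in range(1, max_lag))
    return np.sqrt(x.var(ddof=1) / n * factor)

rng = np.random.default_rng(2)
white = rng.normal(size=10_000)
filtered = np.convolve(white, np.ones(8) / 8, mode="valid")   # correlated, filter-like output
print(np.std(filtered, ddof=1) / np.sqrt(len(filtered)))      # naive SDM (too small)
print(sdm_correlated(filtered))                                # correlation-corrected SDM
```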
Automated object-based classification of topography from SRTM data
NASA Astrophysics Data System (ADS)
Drăguţ, Lucian; Eisank, Clemens
2012-03-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, the characteristic scales in the data are detected with the help of local variance, and segmentation is performed at these scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and the standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionalities.
Doblas, Sabrina; Almeida, Gilberto S; Blé, François-Xavier; Garteiser, Philippe; Hoff, Benjamin A; McIntyre, Dominick J O; Wachsmuth, Lydia; Chenevert, Thomas L; Faber, Cornelius; Griffiths, John R; Jacobs, Andreas H; Morris, David M; O'Connor, James P B; Robinson, Simon P; Van Beers, Bernard E; Waterton, John C
2015-12-01
To evaluate between-site agreement of apparent diffusion coefficient (ADC) measurements in preclinical magnetic resonance imaging (MRI) systems. A miniaturized thermally stable ice-water phantom was devised. ADC (mean and interquartile range) was measured over several days, on 4.7T, 7T, and 9.4T Bruker, Agilent, and Magnex small-animal MRI systems using a common protocol across seven sites. Day-to-day repeatability was expressed as percent variation of mean ADC between acquisitions. Cross-site reproducibility was expressed as 1.96 × standard deviation of percent deviation of ADC values. ADC measurements were equivalent across all seven sites with a cross-site ADC reproducibility of 6.3%. Mean day-to-day repeatability of ADC measurements was 2.3%, and no site was identified as presenting different measurements than others (analysis of variance [ANOVA] P = 0.02, post-hoc test n.s.). Between-slice ADC variability was negligible and similar between sites (P = 0.15). Mean within-region-of-interest ADC variability was 5.5%, with one site presenting a significantly greater variation than the others (P = 0.0013). Absolute ADC values in preclinical studies are comparable between sites and equipment, provided standardized protocols are employed. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision to be obtained for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
The performance of the standard rate turn (SRT) by student naval helicopter pilots.
Chapman, F; Temme, L A; Still, D L
2001-04-01
During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. The performance of the SRT was scored using the average errors of airspeed, altitude, and heading from target values, along with their standard deviations. These average errors and standard deviations were used in a multivariate analysis of variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right), 2) degree of turn (180 vs. 360 degrees), and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation.
McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias
2016-05-01
Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. We investigated 69 healthy subjects (mean ± standard deviation age 40 ± 13 years, range 18-68 years, 32 male) polysomnographically in this randomized, balanced, double-blind, repeated-measures laboratory study. This study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to the 4 exposure nights with 80 noise events per night, 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by inter-individual differences that cannot be explained by age, gender, or specific study design aspects. It will be important to identify those at higher risk for noise-induced sleep disturbance. Furthermore, the custom of basing noise policy and legislation on average responses should be re-assessed in light of these findings. © 2016 Associated Professional Sleep Societies, LLC.
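A hedged sketch of the variance-partitioning idea: a one-way random-effects estimate of the share of total variance explained by stable between-subject differences (an intraclass correlation) for a continuous outcome with repeated nights per subject. The paper's mixed-effects models for binary awakening data are more elaborate; the subject counts, night counts, and effect sizes below are invented.

```python
import numpy as np

def icc_oneway(data):
    """data: 2-D array, rows = subjects, columns = repeated nights.
    Returns ICC(1) = share of variance due to stable between-subject differences."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)                   # between subjects
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))   # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(3)
subject_effect = rng.normal(0, 1.0, size=(69, 1))           # stable individual noise sensitivity
nights = subject_effect + rng.normal(0, 1.1, size=(69, 4))  # 4 comparable exposure nights
print(icc_oneway(nights))   # roughly var_between / (var_between + var_within)
```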
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreement - Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years, § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation by δ. In practice, when the Allan deviation of a... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia
Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the Ms(VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love Ms(VMAX) is greater than the Rayleigh Ms(VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave Ms(VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also utilizing the potential of observed variances in Ms estimates, which differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. The first approach replaces the Ms component by Ms + a*σ, where σ denotes the interstation standard deviation obtained from the stations in the sample that produced the Ms value; the usual linear discriminant a*Ms + b*mb is thus replaced with a*Ms + b*mb + c*σ. In the second approach, we estimate the optimum hybrid linear-quadratic discriminant function resulting from the unequal-variance assumption. We observed slight improvement for the discriminant functions resulting from the theoretical interpretations of the unequal-variance assumption. We have also studied the complexity of the "magnitude spectra" at each station. Our hypothesis is that explosion spectra should have fewer focal-mechanism-produced complexities in the magnitude spectra than earthquakes. We have developed an intrastation "complexity" metric ΔMs, where ΔMs = Ms(i) - Ms(i+1) at periods i between 9 and 25 seconds. The complexity by itself has discriminating power but does not add substantially to the conditional hybrid discriminant that incorporates the differing spreads of the earthquake and explosion standard deviations.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
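A compact sketch of the general idea, not the authors' implementation: refine a direct two-material decomposition by gradient descent on an objective whose data term is weighted by the inverse covariance of the decomposed noise and whose regularizer penalizes neighboring-pixel differences. The 2x2 mixing matrix, unit CT noise, uniform regularization weights (no edge pre-detection), periodic boundary handling, step size, and iteration count are all assumptions.

```python
import numpy as np

def decompose(y_high, y_low, A, lam=0.05, n_iter=500, step=0.1):
    """Covariance-weighted, smoothness-regularized refinement of a direct
    two-material decomposition of a high/low-energy CT image pair."""
    y = np.stack([y_high, y_low]).astype(float)      # shape (2, H, W)
    Ainv = np.linalg.inv(A)
    x0 = np.einsum("ij,jhw->ihw", Ainv, y)           # direct matrix inversion (noisy start)
    W = np.linalg.inv(Ainv @ Ainv.T)                 # inverse covariance of decomposed noise
    x = x0.copy()
    for _ in range(n_iter):
        grad_fid = np.einsum("ij,jhw->ihw", W, x - x0)         # W (x - x0)
        lap = (np.roll(x, 1, 1) + np.roll(x, -1, 1) +
               np.roll(x, 1, 2) + np.roll(x, -1, 2) - 4 * x)   # discrete Laplacian (wraps at borders)
        x -= step * (grad_fid - lam * lap)                      # data-term gradient + smoothness gradient
    return x

# Tiny synthetic example: two noisy 32x32 "energy" images mixed by an assumed matrix A.
rng = np.random.default_rng(8)
A = np.array([[1.0, 0.6], [0.4, 1.0]])
truth = np.stack([np.ones((32, 32)), np.zeros((32, 32))])
truth[1, 8:24, 8:24] = 1.0
y = np.einsum("ij,jhw->ihw", A, truth) + 0.05 * rng.normal(size=(2, 32, 32))
x_direct = np.einsum("ij,jhw->ihw", np.linalg.inv(A), y)
x_hat = decompose(y[0], y[1], A)
print(np.abs(x_direct - truth).mean(), np.abs(x_hat - truth).mean())
```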
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.
Tipton, Elizabeth; Shuster, Jonathan
2017-10-15
Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
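As a pointer to the basic quantities involved, the sketch below computes the limits of agreement for a single study and then pools per-study bias estimates with a simple DerSimonian-Laird random-effects model; the paper's actual framework (meta-analytic LoA, repeated-measures adjustment, robust variance estimation) is richer than this illustration, and the example data are invented.

```python
import numpy as np

def limits_of_agreement(new, gold):
    """Bias and 95% limits of agreement for one Bland-Altman study."""
    d = np.asarray(new, float) - np.asarray(gold, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def pool_bias(biases, variances):
    """DerSimonian-Laird random-effects pooled bias from per-study estimates."""
    b, v = np.asarray(biases, float), np.asarray(variances, float)
    w = 1.0 / v
    q = np.sum(w * (b - np.sum(w * b) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(b) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * b) / w_star.sum()

new = [5.1, 4.8, 5.6, 5.0]; gold = [5.0, 5.0, 5.3, 4.7]          # invented paired measurements
print(limits_of_agreement(new, gold))
print(pool_bias([0.12, 0.05, 0.20], [0.01, 0.02, 0.015]))        # invented per-study biases/variances
```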
Luminosity distance in Swiss-cheese cosmology with randomized voids and galaxy halos
NASA Astrophysics Data System (ADS)
Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira
2013-08-01
We study the fluctuations in luminosity distance due to gravitational lensing produced both by galaxy halos and large-scale voids. Voids are represented via a “Swiss-cheese” model consisting of a ΛCDM Friedmann-Robertson-Walker background from which a number of randomly distributed, spherical regions of comoving radius 35 Mpc are removed. A fraction of the removed mass is then placed on the shells of the spheres, in the form of randomly located halos. The halos are assumed to be nonevolving and are modeled with Navarro-Frenk-White profiles of a fixed mass. The remaining mass is placed in the interior of the spheres, either smoothly distributed or as randomly located halos. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald [Phys. Rev. D 58, 063501 (1998)], which includes the effect of lensing shear. In the two models we consider, the standard deviation of this distribution is 0.065 and 0.072 magnitudes and the mean is -0.0010 and -0.0013 magnitudes, for voids of radius 35 Mpc and the sources at redshift 1.5, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation due to voids and halos is a factor ˜3 larger than that due to 35 Mpc voids alone with a 1 Mpc shell thickness, which we studied in our previous work. We also study the effect of the existence of evacuated voids, by comparing to a model where all the halos are randomly distributed in the interior of the sphere with none on its surface. This does not significantly change the variance but does significantly change the demagnification tail. To a good approximation, the variance of the distribution depends only on the mean column density of halos (halo mass divided by its projected area), the concentration parameter of the halos, and the fraction of the mass density that is in the form of halos (as opposed to smoothly distributed); it is independent of how the halos are distributed in space. We derive an approximate analytic formula for the variance that agrees with our numerical results to ≲20% out to z≃1.5, and that can be used to study the dependence on halo parameters.
Fricker, Geoffrey A; Wolf, Jeffrey A; Saatchi, Sassan S; Gillespie, Thomas W
2015-10-01
There is an increasing interest in identifying theories, empirical data sets, and remote-sensing metrics that can quantify tropical forest alpha diversity at a landscape scale. Quantifying patterns of tree species richness in the field is time consuming, especially in regions with over 100 tree species/ha. We examine species richness in a 50-ha plot in Barro Colorado Island in Panama and test if biophysical measurements of canopy reflectance from high-resolution satellite imagery and detailed vertical forest structure and topography from light detection and ranging (lidar) are associated with species richness across four tree size classes (>1, 1-10, >10, and >20 cm dbh) and three spatial scales (1, 0.25, and 0.04 ha). We use the 2010 tree inventory, including 204,757 individuals belonging to 301 species of freestanding woody plants or 166 ± 1.5 species/ha (mean ± SE), to compare with remote-sensing data. All remote-sensing metrics became less correlated with species richness as spatial resolution decreased from 1.0 ha to 0.04 ha and tree size increased from 1 cm to 20 cm dbh. When all stems with dbh > 1 cm in 1-ha plots were compared to remote-sensing metrics, standard deviation in canopy reflectance explained 13% of the variance in species richness. The standard deviations of canopy height and the topographic wetness index (TWI) derived from lidar were the best metrics to explain the spatial variance in species richness (15% and 24%, respectively). Using multiple regression models, we made predictions of species richness across Barro Colorado Island (BCI) at the 1-ha spatial scale for different tree size classes. We predicted variation in tree species richness among all plants (adjusted r² = 0.35) and trees with dbh > 10 cm (adjusted r² = 0.25). However, the best model results were for understory trees and shrubs (dbh 1-10 cm) (adjusted r² = 0.52) that comprise the majority of species richness in tropical forests. Our results indicate that high-resolution remote sensing can predict a large percentage of variance in species richness and potentially provide a framework to map and predict alpha diversity among trees in diverse tropical forests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.
1986-01-01
Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S.G.S. for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 C. Performance-audit samples with known analyte concentrations were prepared by the U.S.G.S. and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories. (Author's abstract)
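The pooled-variance step mentioned above is simple enough to show directly; the replicate values below are hypothetical, and weighting each replicate set by its degrees of freedom is the usual convention, assumed rather than quoted from the report.

```python
import numpy as np

def pooled_variance(groups):
    """Pool within-group variances across replicate sets, weighting by degrees of freedom."""
    groups = [np.asarray(g, float) for g in groups]
    num = sum((len(g) - 1) * g.var(ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return num / den

# Hypothetical replicate determinations of one analyte (e.g., sulfate, mg/L) by one laboratory.
sulfate = [[2.10, 2.14, 2.08], [2.22, 2.19], [2.05, 2.07, 2.11, 2.09]]
print(pooled_variance(sulfate), np.sqrt(pooled_variance(sulfate)))
```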
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CODIFICATION, General Numbering, § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
Lantry, B.F.; Rudstam, L. G.; Forney, J.L.; VanDeValk, A.J.; Mills, E.L.; Stewart, D.J.; Adams, J.V.
2008-01-01
Daily consumption was estimated from the stomach contents of walleyes Sander vitreus collected weekly from Oneida Lake, New York, during June-October 1975, 1992, 1993, and 1994 for one to four age-groups per year. Field rations were highly variable between weeks, and trends in ration size varied both seasonally and annually. The coefficient of variation for weekly field rations within years and ages ranged from 45% to 97%. Field estimates were compared with simulated consumption from a bioenergetics model. The simulation averages of daily ration deviated from those of the field estimates by -20.1% to +70.3%, with a mean across all simulations of +14.3%. The deviations for each time step were much greater than those for the simulation averages, ranging from -92.8% to +363.6%. A systematic trend in the deviations was observed, the model producing overpredictions at rations less than 3.7% of body weight. Analysis of variance indicated that the deviations were affected by sample year and week but not age. Multiple linear regression using backwards selection procedures and Akaike's information criterion indicated that walleye weight, walleye growth, lake temperature, prey energy density, and the proportion of gizzard shad Dorosoma cepedianum in the diet significantly affected the deviations between simulated and field rations and explained 32% of the variance. © Copyright by the American Fisheries Society 2008.
Inertial measurements of free-living activities: assessing mobility to predict falls.
Wang, Kejia; Lovell, Nigel H; Del Rosario, Michael B; Liu, Ying; Wang, Jingjing; Narayanan, Michael R; Brodie, Matthew A D; Delbaere, Kim; Menant, Jasmine; Lord, Stephen R; Redmond, Stephen J
2014-01-01
An exploratory analysis was conducted into how simple features, from acceleration at the lower back and ankle during simulated free-living walking, stair ascent, and descent, correlate with age, the overall fall risk from a clinically validated Physiological Profile Assessment (PPA), and its sub-components. Inertial data were captured from 92 older adults aged 78-95 (42 female, mean age 84.1, standard deviation 3.9 years). The dominant frequency, peak width from Welch's power spectral density estimate, and signal variance along each axis, from each sensor location and for each activity, were calculated. Several correlations were found between these features and the physiological risk factors. The strongest correlations were from the dominant frequency at the ankle along the mediolateral direction during stair ascent (Spearman's correlation coefficient ρ = -0.45) with anteroposterior sway, and the signal variance of the anteroposterior acceleration at the lower back during stair descent (ρ = -0.45) with age. These findings should aid future attempts to classify activities and predict falls in older adults, based on true free-living data from a range of activities.
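A sketch of the three per-axis features named above (dominant frequency, spectral peak width, and signal variance), computed from one acceleration axis with Welch's power spectral density estimate; the sampling rate, the half-power definition of peak width, and the synthetic gait-like signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def axis_features(acc, fs=100.0):
    """Dominant frequency, crude half-power peak width, and variance for one axis."""
    f, pxx = welch(acc, fs=fs, nperseg=min(len(acc), 512))
    k = np.argmax(pxx)
    above = f[pxx >= pxx[k] / 2.0]            # frequencies above half of the peak power
    return {
        "dominant_freq_hz": f[k],
        "peak_width_hz": above.max() - above.min(),
        "variance": np.var(acc, ddof=1),
    }

rng = np.random.default_rng(4)
t = np.arange(0, 30, 0.01)                                           # 30 s at 100 Hz
acc = np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.normal(size=t.size)    # ~1.8 Hz gait-like signal
print(axis_features(acc))
```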
Thermal anomaly mapping from night MODIS imagery of USA, a tool for environmental assessment.
Miliaresis, George Ch
2013-02-01
A method is presented for elevation, latitude, and longitude decorrelation stretch of multi-temporal MODIS MYD11C3 imagery (monthly average night land surface temperature (LST) across the USA and Mexico). Multiple linear regression analysis of the principal component images (PCAs) quantifies the variance explained by elevation (H), latitude (LAT), and longitude (LON). The multi-temporal LST imagery is reconstructed from the residual images and selected PCAs by taking into account the portion of variance that is not related to H, LAT, and LON. The reconstructed imagery presents the magnitude by which the standardized LST value of each pixel deviates from the value predicted from H, LAT, and LON. An LST anomaly is defined as a region that presents either a positive or a negative reconstructed LST value. The environmental assessment of the USA indicated that only for 25% of the study area (the Mississippi drainage basin) is the LST predicted by H, LAT, and LON. Regions with a mild climatic pattern were identified along the West Coast, while the coldest climatic pattern is observed for the mid USA. Positive, season-invariant LST anomalies are identified in the SW (Arizona, Sierra Nevada, etc.) and NE USA.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (F4.0), cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0), cc 5-8 - standard deviation, in seconds... SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway...
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied to many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness, and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach accommodates both normal and non-normal data distributions. Using this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces lower risk for each level of return than the mean-variance approach.
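A hedged sketch of the comparison: minimize portfolio variance subject to a target expressed either through the mean or through the median of historical returns. The solver settings, the long-only constraint, the target value, and the synthetic return history are assumptions; the paper's exact median-variance formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns, target, center="mean"):
    """Long-only minimum-variance weights with a floor on the mean or median portfolio return."""
    r = np.asarray(returns, float)                   # shape (T, n_assets)
    centre = r.mean(0) if center == "mean" else np.median(r, axis=0)
    cov = np.cov(r, rowvar=False)
    n = r.shape[1]
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: w @ centre - target}]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

rng = np.random.default_rng(5)
hist = rng.normal(0.01, 0.05, size=(250, 5)) + rng.standard_t(3, size=(250, 5)) * 0.01
for c in ("mean", "median"):
    w = min_variance_weights(hist, target=0.005, center=c)
    print(c, np.round(w, 3))
```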
Nonstationary stochastic charge fluctuations of a dust particle in plasmas.
Shotorban, B
2011-06-01
Stochastic charge fluctuations of a dust particle that are due to discreteness of electrons and ions in plasmas can be described by a one-step process master equation [T. Matsoukas and M. Russell, J. Appl. Phys. 77, 4285 (1995)] with no exact solution. In the present work, using the system size expansion method of Van Kampen along with the linear noise approximation, a Fokker-Planck equation with an exact Gaussian solution is developed by expanding the master equation. The Gaussian solution has time-dependent mean and variance governed by two ordinary differential equations modeling the nonstationary process of dust particle charging. The model is tested via the comparison of its results to the results obtained by solving the master equation numerically. The electron and ion currents are calculated through the orbital motion limited theory. At various times of the nonstationary process of charging, the model results are in a very good agreement with the master equation results. The deviation is more significant when the standard deviation of the charge is comparable to the mean charge in magnitude.
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care for cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify the practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score in the pre-test was 6.4 (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in carrying out the maneuvers.
Normative morphometric data for cerebral cortical areas over the lifetime of the adult human brain.
Potvin, Olivier; Dieumegarde, Louis; Duchesne, Simon
2017-08-01
Proper normative data for anatomical measurements of cortical regions, which would allow brain abnormalities to be quantified, are lacking. We developed norms for regional cortical surface areas, thicknesses, and volumes based on cross-sectional MRI scans from 2713 healthy individuals aged 18 to 94 years, using 23 samples provided by 21 independent research groups. The segmentation was conducted using FreeSurfer, a widely used and freely available automated segmentation software. Models predicting regional cortical estimates of each hemisphere were produced using age, sex, estimated total intracranial volume (eTIV), scanner manufacturer, magnetic field strength, and interactions as predictors. The explained variance for the left/right cortex was 76%/76% for surface area, 43%/42% for thickness, and 80%/80% for volume. The mean explained variance for all regions was 41% for surface areas, 27% for thicknesses, and 46% for volumes. Age, sex, and eTIV predicted most of the explained variance for surface areas and volumes, while age was the main predictor for thicknesses. Scanner characteristics generally predicted a limited amount of variance, but this effect was stronger for thicknesses than for surface areas and volumes. For new individuals, estimates of their expected surface area, thickness, and volume based on their characteristics and the scanner characteristics can be obtained using the derived formulas, as well as Z score effect sizes denoting the extent of the deviation from the normative sample. Models predicting normative values were validated in independent samples of healthy adults, showing satisfactory validation R². Deviations from the normative sample were measured in individuals with mild Alzheimer's disease and schizophrenia, and expected patterns of deviations were observed. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
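The normative-deviation logic can be illustrated in a few lines: fit a linear model on a normative sample, then express a new individual's measurement as a Z score relative to the model's prediction and residual spread. The predictors, the synthetic "cortical thickness" data, and the purely linear form are assumptions; the paper's models also include scanner terms and interactions.

```python
import numpy as np

def fit_norms(X, y):
    """Least-squares fit of y on predictors X (with intercept); returns coefficients and residual SD."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid_sd = np.std(y - Xd @ beta, ddof=Xd.shape[1])
    return beta, resid_sd

def z_score(x_new, y_new, beta, resid_sd):
    """Deviation of a new individual's value from the normative prediction, in SD units."""
    pred = beta[0] + np.dot(beta[1:], x_new)
    return (y_new - pred) / resid_sd

rng = np.random.default_rng(6)
age = rng.uniform(18, 94, 500); sex = rng.integers(0, 2, 500); etiv = rng.normal(1500, 150, 500)
thick = 2.8 - 0.004 * age + 0.02 * sex + rng.normal(0, 0.1, 500)   # synthetic normative data
beta, sd = fit_norms(np.column_stack([age, sex, etiv]), thick)
print(z_score([70, 1, 1450], 2.2, beta, sd))                        # negative -> thinner than expected
```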
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the 9 outgoing beam energies in the optimized gratings was 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
Blood lead levels and risk factors in pregnant women from Durango, Mexico.
La-Llave-León, Osmel; Estrada-Martínez, Sergio; Manuel Salas-Pacheco, José; Peña-Elósegui, Rocío; Duarte-Sustaita, Jaime; Candelas Rangel, Jorge-Luís; García Vargas, Gonzalo
2011-01-01
In this cross-sectional study the authors determined blood lead levels (BLLs) and some risk factors for lead exposure in pregnant women. Two hundred ninety-nine pregnant women receiving medical attention by the Secretary of Health, State of Durango, Mexico, participated in this study between 2007 and 2008. BLLs were evaluated with graphite furnace atomic absorption spectrometry. The authors used Student t test, 1-way analysis of variance (ANOVA), and linear regression as statistical treatments. BLLs ranged from 0.36 to 23.6 μg/dL (mean = 2.79 μg/dL, standard deviation = 2.14). Multivariate analysis showed that the main predictors of BLLs were working in a place where lead is used, using lead glazed pottery, and eating soil.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 246 stations east of the Continental Divide in Colorado and adjacent States are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
Malay public attitudes toward epilepsy (PATE) scale: translation and psychometric evaluation.
Lim, Kheng Seang; Choo, Wan Yuen; Wu, Cathie; Tan, Chong Tin
2013-11-01
None of the quantitative scales for public attitudes toward epilepsy had been translated into the Malay language. This study aimed to translate and test the validity and reliability of a Malay version of the Public Attitudes Toward Epilepsy (PATE) scale. The translation was performed according to standard principles and tested in 140 Malay-speaking adults aged more than 18 years for psychometric validation. The items in each domain had similar standard deviations (equal item variance), ranging from 0.90 to 1.00 in the personal domain and from 0.87 to 1.23 in the general domain. The correlation between an item and its domain was 0.4 and above for all items and was higher than the correlation with the other domain. Multitrait analysis showed that the Malay PATE had variance, floor and ceiling effects, and a relative relationship between the domains similar to those of the original PATE. The Malay PATE scale showed a similar correlation with almost all demographic variables except age. In the factor analysis, item means generally clustered into the hypothesized domains, except those for items 1 and 2. The Cronbach's α values were within the acceptable range (0.757 and 0.716 for the general and personal domains, respectively). The Malay PATE scale is a validated and reliable translated version for measuring public attitudes toward epilepsy. © 2013.
Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.
Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos
To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
Macedo Ribeiro, Ana Freire; Bergmann, Anke; Lemos, Thiago; Pacheco, Antônio Guilherme; Mello Russo, Maitê; Santos de Oliveira, Laura Alice; de Carvalho Rodrigues, Erika
The main objective of this study was to review the literature to identify reference values for angles and distances of body segments related to upright posture in healthy adult women with the Postural Assessment Software (PAS/SAPO). Electronic databases (BVS, PubMed, SciELO and Scopus) were searched using the following descriptors: evaluation, posture, photogrammetry, physical therapy, postural alignment, postural assessment, and physiotherapy. Studies that performed postural evaluation in healthy adult women with PAS/SAPO and were published in English, Portuguese, and Spanish between the years 2005 and 2014 were included. Four studies met the inclusion criteria. Data from the included studies were grouped to establish the statistical descriptors (mean, variance, and standard deviation) of the body angles and distances. A total of 29 variables were assessed (10 in the anterior views, 16 in the lateral right and left views, and 3 in the posterior views), and their respective means and standard deviations were calculated. Reference values for the anterior and posterior views showed no symmetry between the right and left sides of the body in the frontal plane. There were also small differences in the calculated reference values for the lateral view. The proposed reference values for quantitative evaluation of the upright posture in healthy adult women estimated in the present study using PAS/SAPO could guide future studies and help clinical practice. Copyright © 2017. Published by Elsevier Inc.
Female scarcity reduces women's marital ages and increases variance in men's marital ages.
Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom
2010-08-05
When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.
Validated method for quantification of genetically modified organisms in samples of maize flour.
Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann
2006-02-08
Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detection of 1% or 0.5% genetically modified ingredients in food products or raw materials thereof. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification offers easier sample preparation and lower detection limits. Of the different methods of DNA preparation, the CTAB method was chosen for its high flexibility in starting material and its generation of sufficient DNA of relevant quality. Previous RT-PCR data generated with the SYBR green detection method showed that this method is highly sensitive to sample matrices and genomic DNA content, influencing the interpretation of results. Therefore, this paper describes a real-time DNA quantification based on the TaqMan probe method, showing high accuracy and sensitivity with detection limits below 18 copies per sample, applicable and comparable to highly purified plasmid standards as well as complex matrices of genomic DNA samples. The results were evaluated with ValiData for homogeneity of variance, linearity, accuracy of the standard curve, and standard deviation.
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Uchmanowicz, Izabella; Jankowska-Polańska, Beata; Chudiak, Anna; Szymańska-Chabowska, Anna; Mazur, Grzegorz
2016-05-10
Development of simple instruments for the determination of the level of adherence in patients with high blood pressure is the subject of ongoing research. One such instrument, gaining growing popularity worldwide, is the Hill-Bone Compliance to High Blood Pressure Therapy Scale. The aim of this study was to adapt and to test the reliability of the Polish version of the Hill-Bone Compliance to High Blood Pressure Therapy Scale. A standard guideline was used for the translation and cultural adaptation of the English version of the Hill-Bone Compliance to High Blood Pressure Therapy Scale into Polish. The study included 117 Polish patients with hypertension aged between 27 and 90 years, among them 53 men and 64 women. Cronbach's alpha was used for analysing the internal consistency of the scale. The mean score in the reduced sodium intake subscale was M = 5.7 points (standard deviation SD = 1.6 points). The mean score in the appointment-keeping subscale was M = 3.4 points (standard deviation SD = 1.4 points). The mean score in the medication-taking subscale was M = 11.6 points (standard deviation SD = 3.3 points). In the principal component analysis, the three-factor system (1 - medication-taking, 2 - appointment-keeping, 3 - reduced sodium intake) accounted for 53% of total variance. All questions had factor loadings > 0.4. In the medication-taking subscale, most questions (6 out of 9) had the highest loadings on Factor 1. In the appointment-keeping subscale, all questions (2 out of 2) had the highest loadings on Factor 2. In the reduced sodium intake subscale, most questions (2 out of 3) had the highest loadings on Factor 3. Goodness of fit was tested at χ2 = 248.87; p < 0.001. The Cronbach's alpha score for the entire questionnaire was 0.851. The Hill-Bone Compliance to High Blood Pressure Therapy Scale proved to be suitable for use in the Polish population. Use of this screening tool for the assessment of adherence to BP treatment is recommended.
Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B
2017-10-01
In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric postanesthesia care unit (PACU) nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Mannedorf, Switzerland) was used to measure the fluorescence of the collected fluid. Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence.To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (loge [administered volume/intended volume]) for each of the 5 injection volumes using a linear regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P < .0018). The exploratory analysis revealed no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
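As a hedged illustration of the error metric described above, the following Python sketch computes the log proportional injection-volume error, log_e(administered volume / intended volume), for a set of hypothetical intended and measured volumes; the numbers are illustrative and are not the study's data.

```python
# Hedged sketch: computing the log proportional injection-volume error described
# above. All values below are illustrative, not the study's measurements.
import numpy as np

intended = np.array([0.025, 0.05, 0.1, 0.25, 0.5])             # mL, intended volumes
administered = np.array([0.031, 0.046, 0.104, 0.246, 0.497])   # mL, hypothetical measured volumes

# Proportional error on the log scale: log_e(administered / intended).
log_prop_error = np.log(administered / intended)

# Absolute log proportional error, used to ask whether error grows at small volumes.
abs_error = np.abs(log_prop_error)
for v, e in zip(intended, abs_error):
    print(f"intended {v:.3f} mL -> |log proportional error| = {e:.3f}")
```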
Performance in physical examination on the USMLE Step 2 Clinical Skills examination.
Peitzman, Steven J; Cuddy, Monica M
2015-02-01
To provide descriptive information about history-taking (HX) and physical examination (PE) performance for U.S. medical students as documented by standardized patients (SPs) during the Step 2 Clinical Skills (CS) component of the United States Medical Licensing Examination. The authors examined two hypotheses: (1) Students perform worse in PE compared with HX, and (2) for PE, students perform worse in the musculoskeletal system and neurology compared with other clinical domains. The sample included 121,767 student-SP encounters based on 29,442 examinees from U.S. medical schools who took Step 2 CS for the first time in 2011. The encounters comprised 107 clinical presentations, each categorized into one of five clinical domains: cardiovascular, gastrointestinal, musculoskeletal, neurological, and respiratory. The authors compared mean percent-correct scores for HX and PE via a one-tailed paired-samples t test and examined mean score differences by clinical domain using analysis of variance techniques. Average PE scores (59.6%) were significantly lower than average HX scores (78.1%). The range of scores for PE (51.4%-72.7%) was larger than for HX (74.4%-81.0%), and the standard deviation for PE scores (28.3) was twice as large as the HX standard deviation (14.7). PE performance was significantly weaker for musculoskeletal and neurological encounters compared with other encounters. U.S. medical students perform worse on PE than HX; PE performance was weakest in musculoskeletal and neurology clinical domains. Findings may reflect imbalances in U.S. medical education, but more research is needed to fully understand the relationships among PE instruction, assessment, and proficiency.
Vitezica, Zulma G; Varona, Luis; Elsen, Jean-Michel; Misztal, Ignacy; Herring, William; Legarra, Andrès
2016-01-29
Most developments in quantitative genetics theory focus on the study of intra-breed/line concepts. With the availability of massive genomic information, it becomes necessary to revisit the theory for crossbred populations. We propose methods to construct genomic covariances with additive and non-additive (dominance) inheritance in the case of pure lines and crossbred populations. We describe substitution effects and dominant deviations across two pure parental populations and the crossbred population. Gene effects are assumed to be independent of the origin of alleles and allelic frequencies can differ between parental populations. Based on these assumptions, the theoretical variance components (additive and dominant) are obtained as a function of marker effects and allelic frequencies. The additive genetic variance in the crossbred population includes the biological additive and dominant effects of a gene and a covariance term. Dominance variance in the crossbred population is proportional to the product of the heterozygosity coefficients of both parental populations. A genomic BLUP (best linear unbiased prediction) equivalent model is presented. We illustrate this approach by using pig data (two pure lines and their cross, including 8265 phenotyped and genotyped sows). For the total number of piglets born, the dominance variance in the crossbred population represented about 13 % of the total genetic variance. Dominance variation is only marginally important for litter size in the crossbred population. We present a coherent marker-based model that includes purebred and crossbred data and additive and dominant actions. Using this model, it is possible to estimate breeding values, dominant deviations and variance components in a dataset that comprises data on purebred and crossbred individuals. These methods can be exploited to plan assortative mating in pig, maize or other species, in order to generate superior crossbred individuals in terms of performance.
Schroeder, A A; Ford, N L; Coil, J M
2017-03-01
To determine whether post space preparation deviated from the root canal preparation in canals filled with Thermafil, GuttaCore or warm vertically compacted gutta-percha. Forty-two extracted human permanent maxillary lateral incisors were decoronated, and their root canals instrumented using a standardized protocol. Samples were divided into three groups and filled with Thermafil (Dentsply Tulsa Dental Specialties, Johnson City, TN, USA), GuttaCore (Dentsply Tulsa Dental Specialties) or warm vertically compacted gutta-percha, before post space preparation was performed with a GT Post drill (Dentsply Tulsa Dental Specialties). Teeth were scanned using micro-computed tomography after root filling and again after post space preparation. Scans were examined for number of samples with post space deviation, linear deviation of post space preparation and minimum root thickness before and after post space preparation. Parametric data were analysed with one-way analysis of variance (anova) or one-tailed paired Student's t-tests, whilst nonparametric data were analysed with Fisher's exact test. Deviation occurred in eight of forty-two teeth (19%), seven of fourteen from the Thermafil group (50%), one of fourteen from the GuttaCore group (7%), and none from the gutta-percha group. Deviation occurred significantly more often in the Thermafil group than in each of the other two groups (P < 0.05). Linear deviation of post space preparation was greater in the Thermafil group than in both of the other groups and was significantly greater than that of the gutta-percha group (P < 0.05). Minimum root thickness before post space preparation was significantly greater than it was after post space preparation for all groups (P < 0.01). The differences between the Thermafil, GuttaCore and gutta-percha groups in the number of samples with post space deviation and in linear deviation of post space preparation were associated with the presence or absence of a carrier as well as the different carrier materials. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
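A minimal sketch of the control-based normalization idea described above, assuming a simple two-group feature matrix; the array names, group split, and data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of control-based feature normalization: scale each feature by the
# control group's mean and standard deviation rather than the whole sample's,
# so group separation does not inflate the scaling factor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # subjects x features (simulated)
is_control = np.zeros(100, dtype=bool)
is_control[:50] = True                 # first 50 rows assumed to be controls

control_mean = X[is_control].mean(axis=0)
control_sd = X[is_control].std(axis=0, ddof=1)

# Standardize every subject's features using control-group statistics only.
X_norm = (X - control_mean) / control_sd
```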
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Colpa, Linda; Chandrakumar, Manokaraananthan; Wong, Agnes M F
2017-02-01
Our previous work has shown that amblyopia disrupts the planning and execution of visually-guided saccadic and reaching movements. We investigated the association between the clinical features of amblyopia and aspects of visuomotor behavior that are disrupted by amblyopia. A total of 55 adults with amblyopia (22 anisometropic, 18 strabismic, 15 mixed mechanism), 14 adults with strabismus without amblyopia, and 22 visually-normal control participants completed a visuomotor task while their eye and hand movements were recorded. Univariate and multivariate analyses were performed to assess the association between three clinical predictors of amblyopia (amblyopic eye [AE] acuity, stereo sensitivity, and eye deviation) and seven kinematic outcomes, including saccadic and reach latency, interocular saccadic and reach latency difference, saccadic and reach precision, and PA/We ratio (an index of reach control strategy efficacy using online feedback correction). Amblyopic eye acuity explained 28% of the variance in saccadic latency, and 48% of the variance in mean saccadic latency difference between the amblyopic and fellow eyes (i.e., interocular latency difference). In contrast, for reach latency, AE acuity explained only 10% of the variance. Amblyopic eye acuity was associated with reduced endpoint saccadic (23% of variance) and reach (22% of variance) precision in the amblyopic group. In the strabismus without amblyopia group, stereo sensitivity and eye deviation did not explain any significant variance in saccadic and reach latency or precision. Stereo sensitivity was the best clinical predictor of deficits in reach control strategy, explaining 23% of total variance of PA/We ratio in the amblyopic group and 12% of variance in the strabismus without amblyopia group when viewing with the amblyopic/nondominant eye. Deficits in eye and limb movement initiation (latency) and target localization (precision) were associated with amblyopic acuity deficit, whereas changes in the sensorimotor reach strategy were associated with deficits in stereopsis. Importantly, more than 50% of variance was not explained by the measured clinical features. Our findings suggest that other factors, including higher order visual processing and attention, may have an important role in explaining the kinematic deficits observed in amblyopia.
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
A Database of Woody Vegetation Responses to Elevated Atmospheric CO2 (NDP-072)
Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
1999-01-01
To perform a statistically rigorous meta-analysis of research results on the response by woody vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled. Eighty-four independent CO2-enrichment studies, covering 65 species and 35 response parameters, met the necessary criteria for inclusion in the database: reporting mean response, sample size, and variance of the response (either as standard deviation or standard error). Data were retrieved from the published literature and unpublished reports. This numeric data package contains a 29-field data set of CO2-exposure experiment responses by woody plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data set, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).
Species differences in hematological values of captive cranes, geese, raptors, and quail
Gee, G.F.; Carpenter, J.W.; Hensler, G.L.
1981-01-01
Hematological and serum chemical constituents of blood were determined for 12 species, including 7 endangered species, of cranes, geese, raptors, and quail in captivity at the Patuxent Wildlife Research Center. Means, standard deviations, analysis of variance by species and sex, and a series of multiple comparisons of means were derived for each parameter investigated. Differences among some species means were observed in all blood parameters except gamma-glutamyl transpeptidase. Although sampled during the reproductively quiescent period, an influence of sex was noted in red blood cell count, hemoglobin, albumin, glucose, cholesterol, serum glutamic oxaloacetic transaminase, Ca, and P. Our data and values reported in literature indicate that most hematological parameters vary among species and, in some cases, according to methods used to determine them. Therefore, baseline data for captive and wild birds should be established by using standard methods, and should be made available to aid others for use in assessing physiological and pathological conditions of these species.
Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation
McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias
2016-01-01
Study Objectives: Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. Methods: We investigated 69 healthy subjects polysomnographically (mean ± standard deviation 40 ± 13 years, range 18–68 years, 32 male) in this randomized, balanced, double-blind, repeated measures laboratory study. This study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Results: Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to nights (4 exposure nights with 80 noise events per night), 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by inter-individual differences that cannot be explained by age, gender, or specific study design aspects. Conclusions: It will be important to identify those at higher risk for noise induced sleep disturbance. Furthermore, the custom to base noise policy and legislation on average responses should be re-assessed based on these findings. Citation: McGuire S, Müller U, Elmenhorst EM, Basner M. Inter-individual differences in the effects of aircraft noise on sleep fragmentation. SLEEP 2016;39(5):1107–1110. PMID:26856901
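As a rough, hedged illustration of how the share of total variance attributable to stable inter-individual differences can be estimated from repeated per-subject outcomes, the sketch below applies a simple one-way random-effects (intraclass-correlation-style) estimator to simulated data; it ignores the covariates and mixed-effects structure the authors actually used.

```python
# Hedged sketch: share of total variance explained by stable inter-individual
# differences, estimated with a one-way random-effects ANOVA estimator.
# Data are simulated and do not reproduce the study's design or results.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_nights = 69, 9
subject_effect = rng.normal(0, 1.0, n_subjects)                  # stable individual differences
y = subject_effect[:, None] + rng.normal(0, 1.2, (n_subjects, n_nights))

grand_mean = y.mean()
ms_between = n_nights * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (n_subjects - 1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subjects * (n_nights - 1))

var_between = max((ms_between - ms_within) / n_nights, 0.0)
icc = var_between / (var_between + ms_within)
print(f"share of variance from inter-individual differences: {icc:.2f}")
```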
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
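A minimal example of the descriptive summaries of location and spread discussed in the chapter, computed for a small illustrative sample.

```python
# Hedged sketch: basic descriptive statistics for a small illustrative sample.
import statistics

sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9]
print("mean:", statistics.mean(sample))
print("median:", statistics.median(sample))
print("sample variance:", statistics.variance(sample))          # n - 1 denominator
print("sample standard deviation:", statistics.stdev(sample))
```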
NASA Astrophysics Data System (ADS)
Lindborg, Lennart; Lillhök, Jan; Grindborg, Jan-Erik
2015-11-01
The relative standard deviation, σr,D, of calculated multi-event distributions of specific energy for 60Co γ rays was reported by the authors F Villegas, N Tilly and A Ahnesjö (Phys. Med. Biol. 58 6149-62). The calculations were made with an upgraded version of the Monte Carlo code PENELOPE. When the results were compared to results derived from experiments with the variance method and simulated tissue equivalent volumes in the micrometre range a difference of about 50% was found. Villegas et al suggest wall-effects as the likely explanation for the difference. In this comment we review some publications on wall-effects and conclude that wall-effects are not a likely explanation.
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
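A hedged sketch of the variability summaries used above: from 48 half-hour mean arterial pressure values, compute the 24-h mean, the standard deviation, and the variation coefficient (the standard deviation expressed as a percentage of the mean). The values are simulated for illustration, not patient data.

```python
# Hedged sketch: 24-h mean, standard deviation, and variation coefficient from
# 48 half-hour mean arterial pressure values (simulated for illustration).
import numpy as np

rng = np.random.default_rng(2)
half_hour_map = rng.normal(loc=95, scale=8, size=48)   # mmHg, 48 half-hour means

mean_24h = half_hour_map.mean()
sd_24h = half_hour_map.std(ddof=1)
variation_coefficient = 100 * sd_24h / mean_24h        # SD as a percentage of the mean

print(f"24-h mean {mean_24h:.1f} mmHg, SD {sd_24h:.1f} mmHg, CV {variation_coefficient:.1f}%")
```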
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean Temperature Standard Deviation; (2) Mean Geopotential Height Standard Deviation; (3) Mean Density Standard Deviation; (4) Height and Vector Standard Deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean Dew Point Standard Deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... confidential information in reaching a decision on a variance application. Interested members of the public...
Entropy: A new measure of stock market volatility?
NASA Astrophysics Data System (ADS)
Bentes, Sonia R.; Menezes, Rui
2012-11-01
When uncertainty dominates, understanding stock market volatility is vital. There are a number of reasons for that. On one hand, substantial changes in the volatility of financial market returns are capable of having significant negative effects on risk-averse investors. In addition, such changes can also impact consumption patterns, corporate capital investment decisions and macroeconomic variables. Arguably, volatility is one of the most important concepts in the whole of finance theory. In the traditional approach this phenomenon has been addressed based on the concept of standard deviation (or variance), from which all the famous ARCH type models - Autoregressive Conditional Heteroskedasticity models - depart. In this context, volatility is often used to describe dispersion from an expected value, price or model. The variability of traded prices from their sample mean is only an example. Although the standard deviation is very popular as a measure of uncertainty and risk, since it is simple and easy to calculate, it has long been recognized that it is not fully satisfactory. The main reason for that lies in the fact that it is severely affected by extreme values. This may suggest that this is not a closed issue. Bearing on the above, we might conclude that many other questions arise while addressing this subject. One of outstanding importance, from which more sophisticated analyses can be carried out, is how volatility should be evaluated in the first place. If the standard deviation has some drawbacks, shall we still rely on it? Shall we look for an alternative measure? In searching for this, shall we consider the insight of other domains of knowledge? In this paper we specifically address whether the concept of entropy, originally developed in physics by Clausius in the nineteenth century, can constitute an effective alternative. Basically, what we try to understand is which are the potentialities of entropy compared to the standard deviation. But why entropy? The answer lies in the fact that there is already some research in the domain of Econophysics which points out that, as a measure of disorder, distance from equilibrium or even ignorance, entropy might present some advantages. However, another question arises: since there are several measures of entropy, which one shall be used? As a starting point we discuss the potentialities of Shannon entropy and Tsallis entropy. The main difference between them is that Tsallis entropy (like Rényi entropy) is adequate for anomalous systems, while Shannon entropy has proved optimal for equilibrium systems.
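As a hedged illustration of the comparison the paper sets up, the sketch below computes the standard deviation, Shannon entropy, and Tsallis entropy of a simulated heavy-tailed return series; the bin count and the Tsallis q parameter are illustrative choices, not the authors' settings.

```python
# Hedged sketch: standard deviation versus Shannon and Tsallis entropy as
# dispersion measures for a (simulated) return series.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=2000) * 0.01   # heavy-tailed daily returns (simulated)

sd = returns.std(ddof=1)

# Discretize returns into a probability distribution over bins.
counts, _ = np.histogram(returns, bins=30)
p = counts / counts.sum()
p = p[p > 0]

shannon = -np.sum(p * np.log(p))                   # Shannon entropy
q = 1.5
tsallis = (1 - np.sum(p ** q)) / (q - 1)           # Tsallis entropy with illustrative q

print(f"standard deviation: {sd:.4f}")
print(f"Shannon entropy:    {shannon:.4f}")
print(f"Tsallis entropy (q={q}): {tsallis:.4f}")
```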
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltz, R. A.; Danagoulian, A.; Sheets, S.
Theoretical calculations indicate that the value of the Feynman variance, Y2F, for the emitted distribution of neutrons from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations in the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photofission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
A comparison of portfolio selection models via application on ISE 100 index data
NASA Astrophysics Data System (ADS)
Altun, Emrah; Tatlidil, Hüseyin
2013-10-01
The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected return is multivariate normally distributed and the investor is risk-averse. But this model has not been extensively used in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed and is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model, proposed by Speranza [3]. We will compare these models to determine which one gives a more appropriate solution to investors.
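A minimal sketch of the Konno-Yamazaki MAD model posed as a linear program, assuming simulated return data and an illustrative target return; it is a generic formulation, not the paper's ISE 100 application.

```python
# Hedged sketch: Mean Absolute Deviation (MAD) portfolio selection as a linear
# program. Return data and the target return are simulated/illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
T, n = 60, 4                              # periods, assets
R = rng.normal(0.01, 0.05, (T, n))        # simulated period returns
mu = R.mean(axis=0)                       # expected returns
A = R - mu                                # deviations from expected returns
target = float(mu.mean())                 # required expected return (feasible by construction)

# Decision vector z = [x_1..x_n, y_1..y_T]; minimize (1/T) * sum_t y_t.
c = np.concatenate([np.zeros(n), np.ones(T) / T])

# y_t >= |A_t @ x| becomes two inequalities per period; plus the return constraint.
A_ub = np.vstack([
    np.hstack([A,  -np.eye(T)]),                  #  A_t @ x - y_t <= 0
    np.hstack([-A, -np.eye(T)]),                  # -A_t @ x - y_t <= 0
    np.hstack([-mu, np.zeros(T)])[None, :],       # mu @ x >= target
])
b_ub = np.concatenate([np.zeros(2 * T), [-target]])

A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]   # fully invested: sum x = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))
print("optimal weights:", np.round(res.x[:n], 3))
```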
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... reaching a decision on a variance application. Interested members of the public will be allowed a...
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
A DMAIC approach for process capability improvement an engine crankshaft manufacturing process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, P. Srinivasa
2014-05-01
The define-measure-analyze-improve-control (DMAIC) approach is a five-stratum scientific approach for reducing deviations and improving the capability levels of manufacturing processes. The present work elaborates on the DMAIC approach applied to reducing the process variations of the stub-end-hole boring operation in the manufacture of crankshafts. This statistical process control study starts with selection of the critical-to-quality (CTQ) characteristic in the define stratum. The next stratum constitutes the collection of dimensional measurement data of the identified CTQ characteristic. This is followed by the analysis and improvement strata, where various quality control tools such as the Ishikawa diagram, physical mechanism analysis, failure modes and effects analysis, and analysis of variance are applied. Finally, process monitoring charts are deployed at the workplace for regular monitoring and control of the concerned CTQ characteristic. By adopting the DMAIC approach, the standard deviation was reduced from 0.003 to 0.002. The process potential capability index (CP) improved from 1.29 to 2.02 and the process performance capability index (CPK) improved from 0.32 to 1.45.
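A hedged sketch of the process capability indices quoted above (CP and CPK); the specification limits and process statistics below are illustrative assumptions, not the crankshaft study's values.

```python
# Hedged sketch: process potential (Cp) and performance (Cpk) capability indices.
def capability_indices(mean, sd, lsl, usl):
    cp = (usl - lsl) / (6 * sd)                       # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sd)      # capability accounting for centering
    return cp, cpk

# Example with illustrative specification limits of 24.99-25.01 mm for a bore diameter.
cp, cpk = capability_indices(mean=25.001, sd=0.002, lsl=24.99, usl=25.01)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```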
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinert, Marian; Kratz, Marita; Jones, David B.
2014-10-15
In this paper, we present a system that allows imaging of cartilage tissue via optical coherence tomography (OCT) during controlled uniaxial unconfined compression of cylindrical osteochondral cores in vitro. We describe the system design and conduct a static and dynamic performance analysis. While reference measurements yield a full scale maximum deviation of 0.14% in displacement, force can be measured with a full scale standard deviation of 1.4%. The dynamic performance evaluation indicates a high accuracy in force controlled mode up to 25 Hz, but it also reveals a strong effect of variance of sample mechanical properties on the tracking performance under displacement control. In order to counterbalance these disturbances, an adaptive feed forward approach was applied which finally resulted in an improved displacement tracking accuracy up to 3 Hz. A built-in imaging probe allows on-line monitoring of the sample via OCT while being loaded in the cultivation chamber. We show that cartilage topology and defects in the tissue can be observed and demonstrate the visualization of the compression process during static mechanical loading.
Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Ishii, Hiroaki
2009-01-01
This paper considers robust programming problems based on the mean-variance model including uncertainty sets and fuzzy factors. Since these problems are not well-defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean absolute deviation and performing equivalent transformations.
Antshel, Kevin M.; Hier, Bridget O.; Fremont, Wanda; Faraone, Stephen V.; Kates, Wendy R.
2015-01-01
Background The primary objective of the current study was to examine the childhood predictors of adolescent reading comprehension in velo-cardio-facial syndrome (VCFS). Although much research has focused on mathematics skills among individuals with VCFS, no studies have examined predictors of reading comprehension. Methods 69 late adolescents with VCFS , 23 siblings of youth with VCFS and 30 community controls participated in a longitudinal research project and had repeat neuropsychological test batteries and psychiatric evaluations every 3 years. The Wechsler Individual Achievement Test – 2nd edition (WIAT-II) Reading Comprehension subtest served as our primary outcome variable. Results Consistent with previous research, children and adolescents with VCFS had mean reading comprehension scores on the WIAT-II which were approximately two standard deviations below the mean and word reading scores approximately one standard deviation below the mean. A more novel finding is that relative to both control groups, individuals with VCFS demonstrated a longitudinal decline in reading comprehension abilities yet a slight increase in word reading abilities. In the combined control sample, WISC-III FSIQ, WIAT-II Word Reading, WISC-III Vocabulary and CVLT-C List A Trial 1 accounted for 75% of the variance in Time 3 WIAT-II Reading Comprehension scores. In the VCFS sample, WISC-III FSIQ, BASC-Teacher Aggression, CVLT-C Intrusions, Tower of London, Visual Span Backwards, WCST non-perseverative errors, WIAT-II Word Reading and WISC-III Freedom from Distractibility index accounted for 85% of the variance in Time 3 WIAT-II Reading Comprehension scores. A principal component analysis with promax rotation computed on the statistically significant Time 1 predictor variables in the VCFS sample resulted in three factors: Word reading decoding / Interference control, Self-Control / Self-Monitoring and Working Memory. Conclusions Childhood predictors of late adolescent reading comprehension in VCFS differ in some meaningful ways from predictors in the non-VCFS population. These results offer some guidance for how best to consider intervention efforts to improve reading comprehension in the VCFS population. PMID:24861691
Effects of climate change and variability on population dynamics in a long-lived shorebird.
van de Pol, Martijn; Vindenes, Yngvild; Saether, Bernt-Erik; Engen, Steinar; Ens, Bruno J; Oosterbeek, Kees; Tinbergen, Joost M
2010-04-01
Climate change affects both the mean and variability of climatic variables, but their relative impact on the dynamics of populations is still largely unexplored. Based on a long-term study of the demography of a declining Eurasian Oystercatcher (Haematopus ostralegus) population, we quantify the effect of changes in mean and variance of winter temperature on different vital rates across the life cycle. Subsequently, we quantify, using stochastic stage-structured models, how changes in the mean and variance of this environmental variable affect important characteristics of the future population dynamics, such as the time to extinction. Local mean winter temperature is predicted to strongly increase, and we show that this is likely to increase the population's persistence time via its positive effects on adult survival that outweigh the negative effects that higher temperatures have on fecundity. Interannual variation in winter temperature is predicted to decrease, which is also likely to increase persistence time via its positive effects on adult survival that outweigh the negative effects that lower temperature variability has on fecundity. Overall, a 0.1 degrees C change in mean temperature is predicted to alter median time to extinction by 1.5 times as many years as would a 0.1 degrees C change in the standard deviation in temperature, suggesting that the dynamics of oystercatchers are more sensitive to changes in the mean than in the interannual variability of this climatic variable. Moreover, as climate models predict larger changes in the mean than in the standard deviation of local winter temperature, the effects of future climatic variability on this population's time to extinction are expected to be overwhelmed by the effects of changes in climatic means. We discuss the mechanisms by which climatic variability can either increase or decrease population viability and how this might depend both on species' life histories and on the vital rates affected. This study illustrates that, for making reliable inferences about population consequences in species in which life history changes with age or stage, it is crucial to investigate the impact of climate change on vital rates across the entire life cycle. Disturbingly, such data are unavailable for most species of conservation concern.
Mechanical factors relate to pain in knee osteoarthritis.
Maly, Monica R; Costigan, Patrick A; Olney, Sandra J
2008-07-01
Pain experienced by people with knee osteoarthritis is related to psychosocial factors and damage to articular tissues and/or the pain pathway itself. Mechanical factors have been speculated to trigger this pain experience; yet mechanics have not been identified as a source of pain in this population. The purpose of this study was to identify whether mechanics could explain variance in pain intensity in people with knee osteoarthritis. Data from 53 participants with physician-diagnosed knee osteoarthritis (mean age=68.5 years; standard deviation=8.6 years) were analyzed. Pain intensity was reported on the Western Ontario and McMaster Universities Osteoarthritis Index. Mechanical measures included weight-bearing varus-valgus alignment, body mass index and isokinetic quadriceps torque. Gait analysis captured the range of adduction-abduction angle, range of flexion-extension angle and external knee adduction moment during level walking. Pain intensity was significantly related to the dynamic range of flexion-extension during gait and body mass index. A total of 29% of the variance in pain intensity was explained by mechanical variables. The range of flexion-extension explained 18% of variance in pain intensity. Body mass index added 11% to the model. The knee adduction moment was unrelated to pain intensity. The findings support that mechanical factors are related to knee osteoarthritis pain. Because limitations in flexion-extension range of motion and body size are modifiable factors, future research could examine whether interventions targeting these mechanics would facilitate pain management.
Somatotype, training and performance in Ironman athletes.
Kandel, Michel; Baeyens, Jean Pierre; Clarys, Peter
2014-01-01
The aim of this study was to describe the physiques of Ironman athletes and the relationship between Ironman performance, training and somatotype. A total of 165 male and 22 female competitors of the Ironman Switzerland volunteered for this study. Ten anthropometric dimensions were measured, and 12 training and history variables were recorded with a questionnaire. The variables were compared with the race performance. The somatotype was a strong predictor of Ironman performance (R = 0.535; R² = 0.286; p < 0.001) in male athletes. The endomorphy component was the most substantial predictor. A reduction in endomorphy by one standard deviation, as well as an increase in ectomorphy by one standard deviation, led to significant and substantial improvements in Ironman performance (28.1 and 29.8 minutes, respectively). An ideal somatotype of 1.7-4.9-2.8 could be established. Age and quantitative training effort were not significant predictors of Ironman performance. In female athletes, no relationship between somatotype, training and performance was found. The somatotype of a male athlete accounts for 28.6% of the variance in Ironman performance. Athletes not having an ideal somatotype of 1.7-4.9-2.8 could improve their performance by altering their somatotype. Lower rates of endomorphy, as well as higher rates of ectomorphy, resulted in significantly better race performance. The impact of somatotype was most pronounced in the run discipline and had a much greater impact on the total race time than the quantitative training effort. These findings were not observed in female athletes.
Surveillance of hemodialysis vascular access with ultrasound vector flow imaging
NASA Astrophysics Data System (ADS)
Brandt, Andreas H.; Olesen, Jacob B.; Hansen, Kristoffer L.; Rix, Marianne; Jensen, Jørgen A.; Nielsen, Michael B.
2015-03-01
The aim of this study was to prospectively monitor the volume flow in patients with arteriovenous fistula (AVF) with the angle-independent ultrasound technique Vector Flow Imaging (VFI). Volume flow values were compared with the ultrasound dilution technique (UDT). Hemodialysis patients need a well-functioning vascular access with as few complications as possible, and the preferred vascular access is an AVF. Dysfunction due to stenosis is a common complication, and regular monitoring of volume flow is recommended to preserve AVF patency. UDT is considered the gold standard for volume flow surveillance, but VFI has proven to be more precise when performing single repeated instantaneous measurements. Three patients with AVF were monitored with UDT and VFI monthly for five months. A commercial ultrasound scanner with a 9 MHz linear array transducer with integrated VFI was used to obtain data. UDT values were obtained with the Transonic HD03 Flow-QC Hemodialysis Monitor. Three independent measurements at each scan session were obtained with UDT and VFI each month. The average deviation of volume flow between UDT and VFI was 25.7% (CI: 16.7% to 34.7%) (p = 0.73). The standard deviation for all patients, calculated from the mean variance of each individual scan session, was 199.8 ml/min for UDT and 47.6 ml/min for VFI (p = 0.002). VFI volume flow values were not significantly different from the corresponding estimates obtained using UDT, and VFI measurements were more precise than UDT. The study indicates that VFI can be used for surveillance of volume flow.
Du, Han; Wang, Lijuan
2018-04-23
Intraindividual variability can be measured by the intraindividual standard deviation (ISD), the intraindividual variance (ISD²), the estimated hth-order autocorrelation coefficient, and the mean square successive difference (MSSD). Unresolved issues exist in the research on reliabilities of intraindividual variability indicators: (1) previous research only studied conditions with zero autocorrelation in the longitudinal responses; and (2) the reliabilities of some of these indicators have not been studied. The current study investigates the reliabilities of the ISD, ISD², estimated autocorrelation coefficient, MSSD, and the intraindividual mean with autocorrelated longitudinal data. Reliability estimates of the indicators were obtained through Monte Carlo simulations. The impact of influential factors on the reliabilities of the intraindividual variability indicators is summarized, and the reliabilities are compared across the indicators. Generally, all the studied indicators of intraindividual variability were more reliable with a more reliable measurement scale and more assessments. The relative ordering of the reliabilities of the four variability indicators depended on the scale reliability and on the interindividual standard deviation in autocorrelation coefficients, and the reliabilities of the intraindividual mean were generally the highest. An R function is provided for planning longitudinal studies to ensure sufficient reliabilities of the intraindividual indicators are achieved.
Obesity and age at diagnosis of endometrial cancer.
Nevadunsky, Nicole S; Van Arsdale, Anne; Strickler, Howard D; Moadel, Alyson; Kaur, Gurpreet; Levitt, Joshua; Girda, Eugenia; Goldfinger, Mendel; Goldberg, Gary L; Einstein, Mark H
2014-08-01
Obesity is an established risk factor for development of endometrial cancer. We hypothesized that obesity might also be associated with an earlier age at endometrial cancer diagnosis, because mechanisms that drive the obesity-endometrial cancer association might also accelerate tumorigenesis. A retrospective chart review was conducted of all cases of endometrial cancer diagnosed from 1999 to 2009 at a large medical center in New York City. The association of body mass index (BMI) with age at endometrial cancer diagnosis, comorbidities, stage, grade, and radiation treatment was examined using analysis of variance and linear regression. Overall survival by BMI category was assessed using the Kaplan-Meier method and the log-rank test. A total of 985 cases of endometrial cancer were identified. The mean age at endometrial cancer diagnosis was 67.1 years (±11.9 standard deviation) in women with a normal BMI, whereas it was 56.3 years (±10.3 standard deviation) in women with a BMI greater than 50. Age at diagnosis of endometrioid-type cancer decreased linearly with increasing BMI (y=67.89-1.86x, R=0.049, P<.001). This association persisted after multivariable adjustment (R=0.181, P<.02). A linear association between BMI and age of nonendometrioid cancers was not found (P=.12). There were no differences in overall survival by BMI category. Obesity is associated with an earlier age at diagnosis of endometrioid-type endometrial cancers. Similar associations were not, however, observed with nonendometrioid cancers, consistent with different pathways of tumorigenesis. Level of evidence: II.
Deshpande, Pallavi O.; Mohan, Vishwaraman; Thakurdesai, Prasad Arvind
2017-01-01
Objective: To evaluate acute oral toxicity (AOT), subchronic (90-day repeated dose) toxicity, mutagenicity, and genotoxicity potential of IDM01, the botanical composition of 4-hydroxyisoleucine- and trigonelline-based standardized fenugreek (Trigonella foenum-graecum L) seed extract in laboratory rats. Materials and Methods: The AOT and subchronic (90-day repeated dose) toxicity were evaluated using Sprague-Dawley rats as per the Organisation for Economic Co-operation and Development (OECD) guidelines No. 423 and No. 408, respectively. During the subchronic study, the effects on body weight, food and water consumption, and organ weights were studied, along with hematology, clinical biochemistry, and histology. The mutagenicity and genotoxicity of IDM01 were evaluated by reverse mutation assay (Ames test, OECD guideline No. 471) and chromosome aberration test (OECD guideline No. 473), respectively. Results: IDM01 did not show mortality or treatment-related adverse signs during acute (limit dose of 2000 mg/kg) and subchronic (90-day repeated dose of 250, 500, and 1000 mg/kg with 28 days of recovery period) administration. IDM01 showed an oral median lethal dose (LD50) >2000 mg/kg in the AOT study. The no-observed-adverse-effect level (NOAEL) of IDM01 was 500 mg/kg. IDM01 did not show mutagenicity up to a concentration of 5000 μg/plate in the Ames test and did not induce structural chromosomal aberrations up to 50 mg/culture. Conclusions: IDM01 was found safe in preclinical acute and subchronic (90-day repeated dose) toxicity studies in rats, without mutagenicity or genotoxicity. SUMMARY: Acute oral toxicity, subchronic (90-day) oral toxicity, mutagenicity and genotoxicity of IDM01 (4-hydroxyisoleucine- and trigonelline-based standardized fenugreek seed extract) were evaluated. The median lethal dose (LD50) of IDM01 was more than 2000 mg/kg of body weight in rats. The no-observed-adverse-effect level (NOAEL) of IDM01 was 500 mg/kg of body weight in rats. IDM01 was found safe in acute and subchronic oral toxicity studies in rats, without mutagenicity or genotoxicity potential. Abbreviations Used: 2-AA: 2-aminoanthracene; 2-AF: 2-aminofluorene; 4-NQNO: 4-nitroquinoline-N-oxide; 4HI: 4-hydroxyisoleucine; ANOVA: Analysis of variance; AOT: Acute oral toxicity; DM: Diabetes mellitus; IDM01: The botanical composition of 4-hydroxyisoleucine- and trigonelline-based standardized fenugreek seed extract; LD50: Median lethal dose; MMS: Methyl methanesulfonate; NAD: No abnormality detected; OECD: Organisation for Economic Co-operation and Development; SD: Standard deviation; UV: Ultraviolet; VC: Vehicle control. PMID:28539737
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance has proven to be an excellent, readily understood tool, both for time-domain analysis of GPS cesium frequency standards and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance, however, does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
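As an illustration of the two-sample (Allan) and three-sample (Hadamard) variances discussed above, the following Python sketch computes overlapping estimates of both from simulated fractional-frequency data; it is a generic illustration (noise levels and averaging factors are arbitrary), not the MCS's Kalman-filter implementation.

import numpy as np

def allan_variance(y, m):
    # Overlapping two-sample (Allan) variance of fractional-frequency data y
    # at an averaging factor of m samples.
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")    # m-sample averages
    d = ybar[m:] - ybar[:-m]                               # differences of adjacent averages
    return 0.5 * np.mean(d ** 2)

def hadamard_variance(y, m):
    # Overlapping three-sample (Hadamard) variance; its second differences
    # cancel a linear frequency drift, unlike the Allan variance.
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d2 = ybar[2 * m:] - 2.0 * ybar[m:-m] + ybar[:-2 * m]   # second differences
    return np.mean(d2 ** 2) / 6.0

# White frequency noise plus a small linear drift: the drift inflates the Allan
# variance at long averaging times but leaves the Hadamard variance nearly unchanged.
rng = np.random.default_rng(0)
n = 100_000
y = rng.normal(0.0, 1e-12, n) + 1e-16 * np.arange(n)
for m in (1, 10, 100, 1000):
    print(m, allan_variance(y, m), hadamard_variance(y, m))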
Mollayeva, Tatyana; Colantonio, Angela; Cassidy, J David; Vernich, Lee; Moineddin, Rahim; Shapiro, Colin M
2017-06-01
Sleep stage disruption in persons with mild traumatic brain injury (mTBI) has received little research attention. We examined deviations in sleep stage distribution in persons with mTBI relative to population age- and sex-specific normative data and the relationships between such deviations and brain injury-related, medical/psychiatric, and extrinsic factors. We conducted a cross-sectional polysomnographic investigation in 40 participants diagnosed with mTBI (mean age 47.54 ± 11.30 years; 56% males). At the time of investigation, participants underwent comprehensive clinical and neuroimaging examinations and one full-night polysomnographic study. We used the 2012 American Academy of Sleep Medicine recommendations for recording, scoring, and summarizing sleep stages. We compared participants' sleep stage data with normative data stratified by age and sex to yield z-scores for deviations from available population norms and then employed stepwise multiple regression analyses to determine the factors associated with the identified significant deviations. In patients with mTBI, the mean duration of nocturnal wakefulness was higher and consolidated sleep stage N2 and REM were lower than normal (p < 0.0001, p = 0.018, and p = 0.010, respectively). In multivariate regression analysis, several covariates accounted for the variance in the relative changes in sleep stage duration. No sex differences were observed in the mean proportion of non-REM or REM sleep. We observed longer relative nocturnal wakefulness and shorter relative N2 and REM sleep in patients with mTBI, and these outcomes were associated with potentially modifiable variables. Addressing disruptions in sleep architecture in patients with mTBI could improve their health status. Copyright © 2017 Elsevier B.V. All rights reserved.
Down-Looking Interferometer Study II, Volume I,
1980-03-01
Excerpt: the "reference spectrum" is an estimate of the actual contrast spectrum; according to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Appendix B, Method 114. (3) Calculate the mean (x̄1) and the standard deviation (s1) of the n1 radium-226 measurements ... The owner or operator of a phosphogypsum stack shall report the mean, standard deviation, and 95th percentile ..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226 measurements ...
Precision gravimetric survey at the conditions of urban agglomerations
NASA Astrophysics Data System (ADS)
Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander
2014-05-01
The growth and aging of large cities lead to irreversible negative changes in the subsurface. Studies of these changes in urban areas rely mainly on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main targets of urban gravimetric surveys are zones of soil decompaction, which lead to violations of rock strength and to karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement procedures. The Gravimetry division of Lomonosov Moscow State University has been testing modern Scintrex CG-5 Autograv precision gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. The records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of a measurement, characterizes the sensitivity of the system to external noise and varies between devices from 2 to 5-7 μGal. The mid-range component, which corresponds closely to the residual nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important for gravimetric monitoring or repeated observations, when mid-range anomalies are the target. For the examined gravimeters, the amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared using the variance of the measurement results and the internal convergence of the series. The variance of the measurement results (from ±2 to ±4 μGal) and its internal convergence are independent of the transportation mode; measurements differ only in processing time and in the appropriate number of readings. Importantly, the internal convergence is an individual attribute of the particular device; for the investigated gravimeters it varies from ±3 to ±8 μGal. Various degrees of stability of the gravimeter base were also tested. The most stable base (minimum microseisms) in this experiment was a concrete pedestal; the least stable was a point on the 28th floor. There is no direct dependence of the variance of the measurement results on the external noise level. Moreover, the dispersion between different gravimeters is minimal at the point with the highest microseisms. Conclusions: the quality of measurements with the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation and the degree of drift nonlinearity. Although these parameters of the tested gravimeters generally corresponded to the factory specifications, for surveys with a required accuracy of ±2-5 μGal the best gravimeters should be selected. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in the urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 meters. The parameters of the voids obtained as a result of interpretation agree well with a priori data.
A statistical data analysis and plotting program for cloud microphysics experiments
NASA Technical Reports Server (NTRS)
Jordan, A. J.
1981-01-01
The analysis software developed for atmospheric cloud microphysics experiments conducted in the laboratory as well as aboard a KC-135 aircraft is described. A group of four programs was developed and implemented on a Hewlett Packard 1000 series F minicomputer running under HP's RTE-IVB operating system. The programs control and read data from a MEMODYNE Model 3765-8BV cassette recorder, format the data on the Hewlett Packard disk subsystem, and generate statistical data (mean, variance, standard deviation) as well as voltage and engineering-unit plots on a user-selected plotting device. The programs are written in HP FORTRAN IV and HP Assembly Language, with the graphics routines using the HP 1000 Graphics package. The supported plotting devices are the HP 2647A graphics terminal, the HP 9872B four-color pen plotter, and the HP 2608A matrix line printer.
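The statistics pass described here (mean, variance, and standard deviation of recorded channels) can be sketched with a numerically stable one-pass accumulator; the Python snippet below is a generic Welford-style illustration, not the original HP FORTRAN IV code.

import math

class RunningStats:
    # One-pass (Welford) accumulator for mean, sample variance and standard deviation.
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                      # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):                    # sample variance (n - 1 denominator)
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

    @property
    def std(self):
        return math.sqrt(self.variance)

stats = RunningStats()
for value in (4.1, 3.9, 4.3, 4.0, 4.2):    # e.g., one voltage channel
    stats.update(value)
print(stats.mean, stats.variance, stats.std)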
Reino, José L; Saiz-Urra, Liane; Hernandez-Galan, Rosario; Aran, Vicente J; Hitchcock, Peter B; Hanson, James R; Gonzalez, Maykel Perez; Collado, Isidro G
2007-06-27
Fourteen benzohydrazides have been synthesized and evaluated for their in vitro antifungal activity against the phytopathogenic fungus Botrytis cinerea. The best antifungal activity was observed for the N',N'-dibenzylbenzohydrazides 3b-d and for the N-aminoisoindoline-derived benzohydrazide 5. A quantitative structure-activity relationship (QSAR) study has been developed using a topological substructural molecular design (TOPS-MODE) approach to interpret the antifungal activity of these synthetic compounds. The model described 98.3% of the experimental variance, with a standard deviation of 4.02. The influence of an ortho substituent on the conformation of the benzohydrazides was investigated by X-ray crystallography and supported by QSAR study. Several aspects of the structure-activity relationships are discussed in terms of the contribution of different bonds to the antifungal activity, thereby making the relationships between structure and biological activity more transparent.
Ice/frost detection using millimeter wave radiometry. [space shuttle external tank
NASA Technical Reports Server (NTRS)
Gagliano, J. A.; Newton, J. M.; Davis, A. R.; Foster, M. L.
1981-01-01
A series of ice detection tests was performed on the shuttle external tank (ET) and on ET target samples using a 35/95 GHz instrumentation radiometer. Ice was formed using liquid nitrogen and water spray inside a test enclosure containing ET spray-on foam insulation samples. During cryogenic fueling operations prior to the shuttle orbiter engine firing tests, ice was formed with freon and water over a one meter square section of the ET LOX tank. Data analysis was performed on the ice signatures collected by the radiometer using Georgia Tech computing facilities. Data analysis techniques developed include: ice signature images of the scanned ET target; pixel temperature contour plots; time correlation of target data with ice present versus no ice formation; and ice signature radiometric temperature statistics, i.e., mean, variance, and standard deviation.
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for this measure is provided. Two Z-type test statistics are proposed in which the variance estimate is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J
2017-03-01
Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable seating. The relationships between kinematic and COP parameters also were explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through standard error of measurement (SEM) and intraclass correlation coefficients (ICC 2,1 ) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. Best result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < 0.86; 10.3 < SEM < 13.0), whereas standard deviation only showed a low to moderate reliability (0.37 < ICC < 0.61; 14.5 < SEM < 23.0). Regarding the kinematic variables, only pelvis displacement mean velocity achieved a high reliability using the average method (0.62 < ICC < 0.83; 18.8 < SEM < 23.1). Correlations between COP and kinematics were high only for mean velocity (0.45
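As a sketch of the reliability statistics reported above, the following Python function computes a two-way random-effects, absolute-agreement, single-measure ICC(2,1) from a subjects-by-sessions array, together with an SEM of the common form SD·sqrt(1 − ICC); the data are simulated and the study's exact computations (for example its SEM definition) may differ.

import numpy as np

def icc_2_1(x):
    # ICC(2,1): two-way random effects, absolute agreement, single measure.
    # x is an (n subjects) x (k sessions) array of scores.
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)     # between-subject sum of squares
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)     # between-session sum of squares
    ss_total = np.sum((x - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
scores = rng.normal(50, 10, size=(23, 1)) + rng.normal(0, 5, size=(23, 3))   # 23 subjects, 3 days
icc = icc_2_1(scores)
sem = scores.std(ddof=1) * np.sqrt(1 - icc)    # SEM in the units of the balance score
print(round(icc, 2), round(sem, 1))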
Distribution of kriging errors, the implications and how to communicate them
NASA Astrophysics Data System (ADS)
Li, Hong Yi; Milne, Alice; Webster, Richard
2016-04-01
Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ²K), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ²K ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECa values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the value of 0 for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
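The diagnostics described above (ME, MSE, MSDR and the median SDR compared with 0.455) are simple to compute once leave-one-out kriging has produced a prediction and a kriging variance at every sampling point; the Python sketch below uses simulated values and hypothetical variable names rather than the Hangzhou data.

import numpy as np

def kriging_cv_diagnostics(observed, predicted, kriging_variance):
    # Mean error (ME), mean squared error (MSE), mean squared deviation ratio
    # (MSDR, ideally about 1) and median SDR (about 0.455 if errors are normal,
    # the median of a chi-square variable with one degree of freedom).
    error = observed - predicted
    sdr = error ** 2 / kriging_variance
    return {
        "ME": error.mean(),
        "MSE": np.mean(error ** 2),
        "MSDR": sdr.mean(),
        "median_SDR": np.median(sdr),
    }

# Illustrative values only; in practice these arrays come from leave-one-out kriging.
rng = np.random.default_rng(0)
z_true = rng.normal(0.0, 1.0, 525)
z_hat = z_true + rng.normal(0.0, 0.5, 525)
kv = np.full(525, 0.25)
print(kriging_cv_diagnostics(z_true, z_hat, kv))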
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
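A one-line distinction often resolves the confusion: the standard deviation describes the spread of individual observations, whereas the standard error of the mean (SD divided by the square root of n) describes how precisely the sample mean is estimated. A minimal Python illustration:

import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=25)

sd = sample.std(ddof=1)             # spread of the individual observations
se = sd / np.sqrt(len(sample))      # uncertainty of the sample mean

print(f"mean = {sample.mean():.1f}, SD = {sd:.1f}, SE = {se:.1f}")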
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Gray, Dean; LeVanseler, Kerri; Pan, Meide
2008-01-01
A single laboratory validation (SLV) was completed for a method to determine the flavonol aglycones quercetin, kaempferol, and isorhamnetin in Ginkgo biloba products. The method calculates total glycosides based on these aglycones formed following acid hydrolysis. Nine matrixes were chosen for the study, including crude leaf material, standardized dry powder extract, single and multiple entity finished products, and ethanol and glycerol tinctures. For the 9 matrixes evaluated as part of this SLV, the method appeared to be selective and specific, with no observed interferences. The simplified 60 min oven heating hydrolysis procedure was effective for each of the matrixes studied, with no apparent or consistent differences between 60, 75, and 90 min at 90°C. A Youden ruggedness trial testing 7 factors with the potential to affect quantitative results showed that 2 factors (volume hydrolyzed and test sample extraction/hydrolysis weight) were the most important parameters for control during sample preparation. The method performed well in terms of precision, with 4 matrixes tested in triplicate over a 3-day period showing an overall repeatability (relative standard deviation, RSD) of 2.3%. Analysis of variance testing at α = 0.05 showed no significant differences among the within- or between-group sources of variation, although comparisons of within-day (Sw), between-day (Sb), and total (St) precision showed that a majority of the standard deviation came from within-day determinations for all matrixes. Accuracy testing at 2 levels (approximately 30 and 90% of the determined concentrations in standardized dry powder extract) from 2 complex negative control matrixes showed an overall 96% recovery and RSD of 1.0% for the high spike, and 94% recovery and RSD of 2.5% for the low spike. HorRat scores were within the limits for performance acceptability, ranging from 0.4 to 1.3. Based on the performance results presented herein, it is recommended that this method progress to the collaborative laboratory trial. PMID:16001841
Escalante, Agustín; Haas, Roy W; del Rincón, Inmaculada
2004-01-01
Outcome assessment in patients with rheumatoid arthritis (RA) includes measurement of physical function. We derived a scale to quantify global physical function in RA, using three performance-based rheumatology function tests (RFTs). We measured grip strength, walking velocity, and shirt button speed in consecutive RA patients attending scheduled appointments at six rheumatology clinics, repeating these measurements after a median interval of 1 year. We extracted the underlying latent variable using principal component factor analysis. We used the Bayesian information criterion to assess the global physical function scale's cross-sectional fit to criterion standards. The criteria were joint tenderness, swelling, and deformity, pain, physical disability, current work status, and vital status at 6 years after study enrolment. We computed Guyatt's responsiveness statistic for improvement according to the American College of Rheumatology (ACR) definition. Baseline functional performance data were available for 777 patients, and follow-up data were available for 681. Mean ± standard deviation for each RFT at baseline were: grip strength, 14 ± 10 kg; walking velocity, 194 ± 82 ft/min; and shirt button speed, 7.1 ± 3.8 buttons/min. Grip strength and walking velocity departed significantly from normality. The three RFTs loaded strongly on a single factor that explained ≥70% of their combined variance. We rescaled the factor to vary from 0 to 100. Its mean ± standard deviation was 41 ± 20, with a normal distribution. The new global scale had a stronger fit than the primary RFT to most of the criterion standards. It correlated more strongly with physical disability at follow-up and was more responsive to improvement defined according to the ACR20 and ACR50 definitions. We conclude that a performance-based physical function scale extracted from three RFTs has acceptable distributional and measurement properties and is responsive to clinically meaningful change. It provides a parsimonious scale to measure global physical function in RA. PMID:15225367
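A composite of this kind can be sketched by standardizing the three performance tests and scoring each person on the first principal component, then rescaling to 0-100; the Python below is an illustrative analogue of the factor-analytic scale (simulated data, min-max rescaling chosen for simplicity), not the authors' exact procedure.

import numpy as np

def global_function_scale(grip, walk, button):
    # First principal component of three standardized performance tests, rescaled to 0-100.
    X = np.column_stack([grip, walk, button]).astype(float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)      # standardize each test
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    pc1 = Z @ vt[0]                                       # scores on the first component
    if np.corrcoef(pc1, Z.sum(axis=1))[0, 1] < 0:         # orient so higher = better function
        pc1 = -pc1
    return 100.0 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, 200)
grip = 14 + 8 * ability + rng.normal(0, 4, 200)           # kg
walk = 194 + 60 * ability + rng.normal(0, 30, 200)        # ft/min
button = 7.1 + 3 * ability + rng.normal(0, 1.5, 200)      # buttons/min
scale = global_function_scale(grip, walk, button)
print(round(scale.mean(), 1), round(scale.std(ddof=1), 1))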
Continuous variation caused by genes with graduated effects.
Matthysse, S; Lange, K; Wagener, D K
1979-01-01
The classical polygenic theory of inheritance postulates a large number of genes with small, and essentially similar, effects. We propose instead a model with genes of gradually decreasing effects. The resulting phenotypic distribution is not normal; if the gene effects are geometrically decreasing, it can be triangular. The joint distribution of parent and offspring genic value is calculated. The most readily testable difference between the two models is that, in the decreasing-effect model, the variance of the offspring distribution from given parents depends on the parents' genic values. The more the parents deviate from the mean, the smaller the variance of the offspring should be. In the equal-effect model the offspring variance is independent of the parents' genic values. PMID:288073
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
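For reference, the "standard correction formula" for direct range restriction referred to above is usually the Thorndike Case II correction; a minimal Python version (parameter names are illustrative) is shown below. The paper's point is that this formula can be badly biased when the linearity and homoscedasticity assumptions fail.

import numpy as np

def correct_direct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    # Thorndike Case II correction: assumes a linear, homoscedastic relationship
    # between the test score and the criterion.
    u = sd_unrestricted / sd_restricted
    return r_restricted * u / np.sqrt(1.0 + r_restricted ** 2 * (u ** 2 - 1.0))

# Example: r = .30 among selectees, with the test-score SD halved by selection.
print(round(correct_direct_range_restriction(0.30, 1.0, 0.5), 3))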
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 km). Standard deviation is a statistical measure of data about a mean ... plus or minus 1 standard deviation about the mean encompasses approximately 68 percent of the data, and plus or minus 2 ...
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
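The downside and upside deviations underlying the interval random uncertainty set can be illustrated, in simplified form, by the semideviations of a return series about its mean; the Python sketch below is generic and does not reproduce the authors' chance-constrained optimization.

import numpy as np

def semideviations(returns):
    # Downside and upside semideviations of a return series about its mean.
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    downside = np.sqrt(np.mean(np.minimum(r - mu, 0.0) ** 2))
    upside = np.sqrt(np.mean(np.maximum(r - mu, 0.0) ** 2))
    return downside, upside

rng = np.random.default_rng(3)
rets = rng.standard_t(df=4, size=1000) * 0.01 - 0.001    # heavy-tailed illustrative returns
print(semideviations(rets))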
NASA Astrophysics Data System (ADS)
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss-averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
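One simple way to form such error bars, under a normality assumption, uses the standard error of the sample variance, SE(s²) ≈ s²·sqrt(2/(n − 1)); the Python sketch below applies this approximation (the paper's own derivation may differ in detail) and shows how much wider the bar is for 15 measurements than for 60.

import numpy as np

def variance_error_bar(sample, z=1.96):
    # Approximate 95% error bar for a sample variance using the normal-theory
    # standard error SE(s^2) = s^2 * sqrt(2 / (n - 1)).
    s2 = np.var(sample, ddof=1)
    n = len(sample)
    se = s2 * np.sqrt(2.0 / (n - 1))
    return s2, s2 - z * se, s2 + z * se

rng = np.random.default_rng(7)
print(variance_error_bar(rng.normal(0, 1, 15)))    # fewer than 20 measurements: wide bar
print(variance_error_bar(rng.normal(0, 1, 60)))    # more than 50 measurements: usable comparison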
Attentional effects on orientation judgements are dependent on memory consolidation processes.
Haskell, Christie; Anderson, Britt
2016-11-01
Are the effects of memory and attention on perception synergistic, antagonistic, or independent? Tested separately, memory and attention have been shown to affect the accuracy of orientation judgements. When multiple stimuli are presented sequentially versus simultaneously, error variance is reduced. When a target is validly cued, precision is increased. What if they are manipulated together? We combined memory and attention manipulations in an orientation judgement task to answer this question. Two circular gratings were presented sequentially or simultaneously. On some trials a brief luminance cue preceded the stimuli. Participants were cued to report the orientation of one of the two gratings by rotating a response grating. We replicated the finding that error variance is reduced on sequential trials. Critically, we found interacting effects of memory and attention. Valid cueing reduced the median absolute error only when two stimuli appeared together, and improved it to the level of performance on uncued sequential trials, whereas invalid cueing always increased error. This effect was not mediated by cue predictiveness; however, predictive cues reduced the standard deviation of the error distribution, whereas nonpredictive cues reduced "guessing". Our results suggest that, when the demand on memory is greater than a single stimulus, attention is a bottom-up process that prioritizes stimuli for consolidation. Thus attention and memory are synergistic.
Baeza-Baeza, J J; Pous-Torres, S; Torres-Lapasió, J R; García-Alvarez-Coque, M C
2010-04-02
Peak broadening and skewness are fundamental parameters in chromatography, since they affect the resolution capability of a chromatographic column. A common practice to characterise chromatographic columns is to estimate the efficiency and asymmetry factor for the peaks of one or more solutes eluted at selected experimental conditions. This has the drawback that the extra-column contributions to the peak variance and skewness make the peak shape parameters depend on the retention time. We propose and discuss here the use of several approaches that allow the estimation of global parameters (non-dependent on the retention time) to describe the column performance. The global parameters arise from different linear relationships that can be established between the peak variance, standard deviation, or half-widths with the retention time. Some of them describe exclusively the column contribution to the peak broadening, whereas others consider the extra-column effects also. The estimation of peak skewness was also possible for the approaches based on the half-widths. The proposed approaches were applied to the characterisation of different columns (Spherisorb, Zorbax SB, Zorbax Eclipse, Kromasil, Chromolith, X-Terra and Inertsil), using the chromatographic data obtained for several diuretics and basic drugs (beta-blockers). Copyright (c) 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra
2015-10-01
The manner in which microtremor data are collected and filtered, as well as the method used for processing, has a considerable effect on the accuracy of estimated dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window. The resulting signal is then used to remove the ranges of data affected by perturbations from the original record. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics. Specifically, statistical characteristics such as the probability density graph and the average and standard deviation of all the frequencies corresponding to the maximum peaks in the H/V spectra of all data windows are used to differentiate the real peaks from the false peaks produced by perturbations. The methods were applied to data recorded in the city of Meybod in central Iran. Experimental results show that the applied methods successfully reduce the effects of extensive local perturbations on microtremor data and ultimately estimate the fundamental frequency more accurately than other common methods.
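The running-variance step can be sketched as follows: compute a sliding-window variance of the trace and flag samples where it greatly exceeds a reference level. The Python below is a generic illustration; the window length, threshold and variable names are assumptions, not the authors' settings.

import numpy as np

def running_variance(signal, window):
    # Sliding-window variance of a microtremor trace using a uniform window.
    x = np.asarray(signal, dtype=float)
    kernel = np.ones(window) / window
    mean = np.convolve(x, kernel, mode="same")
    mean_sq = np.convolve(x ** 2, kernel, mode="same")
    return np.maximum(mean_sq - mean ** 2, 0.0)

def flag_perturbed(signal, window, k=3.0):
    # Flag samples whose running variance exceeds k times its median, a simple
    # stand-in for the perturbation-removal step described above.
    rv = running_variance(signal, window)
    return rv > k * np.median(rv)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 20_000)
trace[5000:5200] += rng.normal(0.0, 8.0, 200)      # a local perturbation (e.g., footsteps)
mask = flag_perturbed(trace, window=200)
clean = trace[~mask]
print(mask.sum(), "samples flagged")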
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distances between students' scores. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean is used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution is referenced to help decide the proportion of students included within a grade. Results from the top 12 students in a medical examination were used to illustrate this grading method. Grading by standard deviation produced better cutoffs, allocating grades more in accordance with students' differential achievements, and was less likely than grading by fixed percentile to create arbitrary cutoffs between two similarly scored students. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
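A minimal Python sketch of grading by standard deviation: convert each score to its number of standard deviations from the class mean and map bands of z to grades (the band cutoffs below are illustrative, not the ones used in the paper).

import numpy as np

def grade_by_standard_deviation(scores, cutoffs=(1.0, 0.0, -1.0)):
    # Assign letter grades from the number of standard deviations each score
    # lies above or below the class mean.
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    grades = []
    for zi in z:
        if zi >= cutoffs[0]:
            grades.append("A")
        elif zi >= cutoffs[1]:
            grades.append("B")
        elif zi >= cutoffs[2]:
            grades.append("C")
        else:
            grades.append("D")
    return z, grades

marks = [92, 88, 85, 84, 83, 80, 79, 76, 75, 71, 66, 58]
z, grades = grade_by_standard_deviation(marks)
for mark, zi, grade in zip(marks, z, grades):
    print(mark, round(zi, 2), grade)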
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. A one-standard-deviation increase in PBPS risk (p < 0.05) multiplied the odds of a first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above the mean were 216% to 250% of those one standard deviation below the mean. The odds of a first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% of those for non-URMS one standard deviation below the mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
Kharadov, A V
2002-01-01
Aberrations (quantitative chaetotactic deviations, i.e. decreases or increases in the number of setae and variations in their arrangement) and anomalies (qualitative chaetotactic deviations, for example partial reduction of the scutum, shortening of a seta by more than 1.5-2 times, or merging of setae) were recorded for 13 taxonomically important morphological structures in the chigger mite species Neotrombicula sympatrica Stekolnikov, 2001. A total of 3308 specimens were studied; 17.2% of them had various morphological deviations. The most common types of aberrations were observed in the number and positions of genualae I (94 specimens), the AM seta (79 specimens) and the sternal setae (77 specimens). The aberrations of sternal and coxal setae were usually interrelated: the sternal seta was "transferred" from the sternal area onto the coxa, or vice versa. Specimens with aberrations of the sternal setae were twice as numerous as specimens with aberrations of the coxal setae (77 against 35). Specimens with aberrations of the dorsal setae and of the mastitarsala were very rare (2 specimens each). Among the anomalies, the presence of a nude galeal seta (91 specimens) and scutal anomalies (66 specimens) prevailed. Most frequently, only one form of deviation was observed in a given specimen of N. sympatrica; nevertheless, specimens simultaneously having several aberrations or anomalies were also found, and 17 types of such combinations were observed, accounting for 20.6% of all specimens with deviations. Symmetric deviations, namely the presence of two nude galeal setae (31 specimens), the presence of 2 genualae on both legs I (4 specimens), the presence of 2 AM setae (2 specimens) and symmetric reduction of the scutal angles (1 specimen), sometimes cause difficulties in diagnostics. The overall frequency of deviations in N. sympatrica and in the species N. monticola Schluger et Davydov, 1967, formerly studied by the author, turned out to be almost identical: specimens with deviations made up 14.5% of all studied specimens in the latter species. However, the structure of the variation in these two species is different. In N. monticola, aberrations of the humeral setae were dominant (71.6%) (Kharadov, Chirov, 2001), while in N. sympatrica aberrations of other structures prevailed: genualae I (24.8%), AM (20.9%) and sternal setae (20.4%).
Anomalous volatility scaling in high frequency financial data
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2016-04-01
Volatility of intra-day stock market indices computed at various time horizons exhibits a scaling behaviour that differs from what would be expected from fractional Brownian motion (fBm). We investigate this anomalous scaling by using empirical mode decomposition (EMD), a method which separates time series into a set of cyclical components at different time-scales. By applying the EMD to fBm, we retrieve a scaling law that relates the variance of the components to a power law of the oscillating period. In contrast, when analysing 22 different stock market indices, we observe deviations from the fBm and Brownian motion scaling behaviour. We discuss and quantify these deviations, associating them to the characteristics of financial markets, with larger deviations corresponding to less developed markets.
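A rough sketch of the variance-versus-period check, assuming the third-party PyEMD package (installable as EMD-signal) and a Brownian-motion stand-in for an index: decompose the series into intrinsic mode functions, estimate each component's mean oscillation period from its zero crossings, and fit a power law between component variance and period. The fitted slope is the scaling exponent that the paper compares against the fBm expectation.

import numpy as np
from PyEMD import EMD                              # assumes the EMD-signal package is installed

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.0, 1.0, 4096))     # Brownian-motion stand-in for an index level
imfs = EMD().emd(prices)                           # intrinsic mode functions (last row ~ residual trend)

periods, variances = [], []
for imf in imfs[:-1]:                              # skip the residual
    crossings = np.sum(np.diff(np.sign(imf)) != 0)
    if crossings > 2:
        periods.append(2.0 * len(imf) / crossings)     # mean oscillation period in samples
        variances.append(np.var(imf))

slope, _ = np.polyfit(np.log(periods), np.log(variances), 1)
print("variance ~ period **", round(slope, 2))     # empirical scaling exponent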
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are independently and identically distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the areas of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
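A minimal weighted-least-squares illustration with statsmodels (simulated data and hypothetical variable names, not the study's model): when the residual spread grows with a predictor, weighting each observation by the inverse of its error variance restores efficient estimates relative to ordinary least squares.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
rainfall = rng.uniform(100, 300, n)
temperature = rng.uniform(24, 32, n)
error_sd = 0.02 * rainfall                               # heteroscedastic: spread grows with rainfall
paddy_yield = 3.0 + 0.004 * rainfall - 0.05 * temperature + rng.normal(0.0, error_sd)

X = sm.add_constant(np.column_stack([rainfall, temperature]))
ols = sm.OLS(paddy_yield, X).fit()
wls = sm.WLS(paddy_yield, X, weights=1.0 / error_sd ** 2).fit()   # weights = 1 / error variance
print(ols.params, ols.bse)
print(wls.params, wls.bse)                               # typically smaller standard errors under WLS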
Vertical velocity variance in the mixed layer from radar wind profilers
Eng, K.; Coulter, R.L.; Brutsaert, W.
2003-01-01
Vertical velocity variance data were derived from remotely sensed mixed layer turbulence measurements at the Atmospheric Boundary Layer Experiments (ABLE) facility in Butler County, Kansas. These measurements and associated data were provided by a collection of instruments that included two 915 MHz wind profilers, two radio acoustic sounding systems, and two eddy correlation devices. The data from these devices were available through the Atmospheric Boundary Layer Experiment (ABLE) database operated by Argonne National Laboratory. A signal processing procedure outlined by Angevine et al. was adapted and further built upon to derive the vertical velocity variance, w′², from 915 MHz wind profiler measurements in the mixed layer. The proposed procedure consisted of the application of a height-dependent signal-to-noise ratio (SNR) filter, removal of outliers plus and minus two standard deviations about the mean on the spectral width squared, and removal of the effects of beam broadening and vertical shearing of horizontal winds. The scatter associated with w′² was mainly affected by the choice of SNR filter cutoff values. Several different sets of cutoff values were considered, and the optimal one was selected which reduced the overall scatter on w′² and yet retained a sufficient number of data points to average. A similarity relationship of w′² versus height was established for the mixed layer on the basis of the available data. A strong link between the SNR and growth/decay phases of turbulence was identified. Thus, the mid to late afternoon hours, when strong surface heating occurred, were observed to produce the highest quality signals.
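The outlier-removal step of the procedure (discarding spectral-width values more than two standard deviations from the mean) is simple to state in code. The sketch below, assuming plain numpy and a generic 1-D array of spectral-width-squared samples rather than the actual ABLE profiler data, shows one way such a filter might look.

```python
import numpy as np

def remove_outliers(values, n_sigma=2.0):
    """Keep only samples within n_sigma standard deviations of the mean."""
    values = np.asarray(values, dtype=float)
    mu, sigma = values.mean(), values.std(ddof=1)
    return values[np.abs(values - mu) <= n_sigma * sigma]

# Hypothetical spectral-width-squared samples at one range gate:
print(remove_outliers([0.9, 1.1, 1.0, 1.2, 5.0, 0.8, 1.05]))   # the 5.0 value is rejected
```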
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. A total of 43,360 participants (mean age: 48.2±11.5 years) were included. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation, during follow-up followed a U-shaped curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
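The two variability metrics used here are straightforward to compute from a subject's series of visit SBP readings; the sketch below (plain numpy, hypothetical readings in mmHg) returns the standard deviation and the coefficient of variation exactly as defined in the abstract.

```python
import numpy as np

def bp_variability(sbp_visits):
    """Visit-to-visit variability: standard deviation and coefficient of variation of SBP."""
    sbp = np.asarray(sbp_visits, dtype=float)
    sd = sbp.std(ddof=1)
    cv = sd / sbp.mean()       # coefficient of variation = SD / mean SBP
    return sd, cv

print(bp_variability([128, 135, 122, 140, 131]))   # hypothetical SBP readings (mmHg)
```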
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
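For readers who want the flavour of such conversions, the sketch below implements two commonly cited normal-theory approximations of the type proposed by Wan et al. (sample mean and standard deviation recovered from either the minimum/median/maximum or the quartiles, both adjusted for sample size). The exact formulas and their behaviour for skewed data should be taken from the paper itself; the constants below are standard order-statistic approximations and the numbers fed in are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mean_sd_from_min_med_max(a, m, b, n):
    """Approximate mean and SD from the minimum, median, maximum and sample size
    (a normal-theory, sample-size-adjusted range estimator)."""
    mean = (a + 2.0 * m + b) / 4.0
    sd = (b - a) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

def mean_sd_from_quartiles(q1, m, q3, n):
    """Approximate mean and SD from the first/third quartiles, median and sample size."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd

print(mean_sd_from_min_med_max(10.0, 25.0, 42.0, n=50))   # hypothetical trial summary
print(mean_sd_from_quartiles(20.0, 25.0, 31.0, n=50))
```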
Cespedes, Elizabeth M.; Hu, Frank B.; Redline, Susan; Rosner, Bernard; Alcantara, Carmela; Cai, Jianwen; Hall, Martica H.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramos, Alberto R.; Reid, Kathryn J.; Shah, Neomi A.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui; Patel, Sanjay R.
2016-01-01
Most studies of sleep and health outcomes rely on self-reported sleep duration, although correlation with objective measures is poor. In this study, we defined sociodemographic and sleep characteristics associated with misreporting and assessed whether accounting for these factors better explains variation in objective sleep duration among 2,086 participants in the Hispanic Community Health Study/Study of Latinos who completed more than 5 nights of wrist actigraphy and reported habitual bed/wake times from 2010 to 2013. Using linear regression, we examined self-report as a predictor of actigraphy-assessed sleep duration. Mean amount of time spent asleep was 7.85 (standard deviation, 1.12) hours by self-report and 6.74 (standard deviation, 1.02) hours by actigraphy; correlation between them was 0.43. For each additional hour of self-reported sleep, actigraphy time spent asleep increased by 20 minutes (95% confidence interval: 19, 22). Correlations between self-reported and actigraphy-assessed time spent asleep were lower with male sex, younger age, sleep efficiency <85%, and night-to-night variability in sleep duration ≥1.5 hours. Adding sociodemographic and sleep factors to self-reports increased the proportion of variance explained in actigraphy-assessed sleep slightly (18%–32%). In this large validation study including Hispanics/Latinos, we demonstrated a moderate correlation between self-reported and actigraphy-assessed time spent asleep. The performance of self-reports varied by demographic and sleep measures but not by Hispanic subgroup. PMID:26940117
NASA Astrophysics Data System (ADS)
Monroe, Roberta Lynn
The intrinsic fundamental frequency effect among vowels is a vocalic phenomenon of adult speech in which high vowels have higher fundamental frequencies in relation to low vowels. Acoustic investigations of children's speech have shown that variability of the speech signal decreases as children's ages increase. Fundamental frequency measures have been suggested as an indirect metric for the development of laryngeal stability and coordination. Studies of the intrinsic fundamental frequency effect have been conducted among 8- and 9-year-old children and in infants. The present study investigated this effect among 2- and 4-year-old children. Eight 2-year-old and eight 4-year-old children produced four vowels, /ae/, /i/, /u/, and /a/, in CVC syllables. Three measures of fundamental frequency were taken. These were mean fundamental frequency, the intra-utterance standard deviation of the fundamental frequency, and the extent to which the cycle-to-cycle pattern of the fundamental frequency was predicted by a linear trend. An analysis of variance was performed to compare the two age groups, the four vowels, and the earlier and later repetitions of the CVC syllables. A significant difference between the two age groups was detected using the intra-utterance standard deviation of the fundamental frequency. Mean fundamental frequencies and linear trend analysis showed that voicing of the preceding consonant determined the statistical significance of the age-group comparisons. Statistically significant differences among the fundamental frequencies of the four vowels were not detected for either age group.
The composition of intern work while on call.
Fletcher, Kathlyn E; Visotcky, Alexis M; Slagle, Jason M; Tarima, Sergey; Weinger, Matthew B; Schapira, Marilyn M
2012-11-01
The work of house staff is being increasingly scrutinized as duty hours continue to be restricted. To describe the distribution of work performed by internal medicine interns while on call. Prospective time motion study on general internal medicine wards at a VA hospital affiliated with a tertiary care medical center and internal medicine residency program. Internal medicine interns. Trained observers followed interns during a "call" day. The observers continuously recorded the tasks performed by interns, using customized task analysis software. We measured the amount of time spent on each task. We calculated means and standard deviations for the amount of time spent on six categories of tasks: clinical computer work (e.g., writing orders and notes), non-patient communication, direct patient care (work done at the bedside), downtime, transit and teaching/learning. We also calculated means and standard deviations for time spent on specific tasks within each category. We compared the amount of time spent on the top three categories using analysis of variance. The largest proportion of intern time was spent in clinical computer work (40 %). Thirty percent of time was spent on non-patient communication. Only 12 % of intern time was spent at the bedside. Downtime activities, transit and teaching/learning accounted for 11 %, 5 % and 2 % of intern time, respectively. Our results suggest that during on call periods, relatively small amounts of time are spent on direct patient care and teaching/learning activities. As intern duty hours continue to decrease, attention should be directed towards preserving time with patients and increasing time in education.
Usefulness of multiple dimensions of fatigue in fibromyalgia.
Ericsson, Anna; Bremell, Tomas; Mannerkorpi, Kaisa
2013-07-01
To explore in which contexts ratings of multiple dimensions of fatigue are useful in fibromyalgia, and to compare multidimensional fatigue between women with fibromyalgia and healthy women. A cross-sectional study. The Multidimensional Fatigue Inventory (MFI-20), comprising 5 subscales of fatigue, was compared with the 1-dimensional subscale of fatigue from the Fibromyalgia Impact Questionnaire (FIQ) in 133 women with fibromyalgia (mean age 46 years; standard deviation 8.6), in association with socio-demographic and health-related aspects and analyses of explanatory variables of severe fatigue. The patients were also compared with 158 healthy women (mean age 45 years; standard deviation 9.1) for scores on MFI-20 and FIQ fatigue. The MFI-20 was associated with employment, physical activity and walking capacity (rs = -0.27 to -0.36), while FIQ fatigue was not. MFI-20 and FIQ fatigue were equally associated with pain, sleep, depression and anxiety (rs = 0.32-0.63). Regression analyses showed that the MFI-20 increased the explained variance (R2) for the models of pain intensity, sleep, depression and anxiety, by between 7 and 29 percentage points, compared with if FIQ fatigue alone was included in the models. Women with fibromyalgia rated their fatigue higher than healthy women for all subscales of the MFI-20 and the FIQ fatigue (p < 0.001). Dimensions of fatigue, assessed by the MFI-20, appear to be valuable in studies of employment, pain intensity, sleep, distress and physical function in women with fibromyalgia. The patients reported higher levels on all fatigue dimensions in comparison with healthy women.
Atay, Christina; Ryan, Sarah J; Lewis, Fiona M
2016-01-01
(1) To investigate outcomes in language competence and self-reported satisfaction with social relationships in long-term survivors of childhood traumatic brain injury (TBI); and (2) to establish whether language competence contributes to self-reported satisfaction with social relationships decades after sustaining childhood TBI. Twelve females and 8 males aged 30 to 55 (mean = 39.80, standard deviation = 7.54) years who sustained a TBI during childhood and were on average 31 years postinjury (standard deviation = 9.69). An additional 20 participants matched for age, sex, handedness, years of education, and socioeconomic status constituted a control group. Test of Language Competence-Expanded Edition and the Quality of Life in Brain Injury questionnaire. Individuals with a history of childhood TBI performed significantly poorer than their non-injured peers on 2 (Ambiguous Sentences and Oral Expression: Recreating Sentences) out of the 4 Test of Language Competence-Expanded Edition subtests used and on the Quality of Life in Brain Injury subscale assessing satisfaction with social relationships. In the TBI group, scores obtained on the Ambiguous Sentences subtest were found to be a significant predictor of satisfaction with social relationships, explaining 25% of the variance observed. The implication of high-level language skills to self-reported satisfaction with social relationships many decades post-childhood TBI suggests that ongoing monitoring of emerging language skills and support throughout the school years and into adulthood may be warranted if adult survivors of childhood TBI are to experience satisfying social relationships.
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic waveforms can interfere (so-called amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
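The underlying result (often stated as Campbell's theorem for shot noise) says that for pulses superposed at Poisson-distributed times with rate ν, the variance of the summed signal is ν·∫h(u)²du, irrespective of how the individual biphasic pulses overlap and cancel. The sketch below, assuming plain numpy, a toy biphasic kernel and a hypothetical overall firing rate rather than a physiological sEMG model, compares the empirical standard deviation of such a superposition with the theoretical value.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-4                                    # s, sampling interval
T = 50.0                                     # s, simulated duration
rate = 200.0                                 # pulses per second (hypothetical overall firing rate)
t_kernel = np.arange(-0.01, 0.01, dt)
h = -t_kernel * np.exp(-(t_kernel / 0.002) ** 2)   # toy biphasic pulse shape

signal = np.zeros(int(T / dt))
n_pulses = rng.poisson(rate * T)
starts = rng.integers(0, signal.size - h.size, n_pulses)   # pulse onset samples
for i in starts:                             # superpose pulses; overlaps interfere freely
    signal[i:i + h.size] += h

campbell_sd = np.sqrt(rate * np.sum(h ** 2) * dt)   # Campbell's theorem: Var = rate * integral(h^2)
print(signal.std(), campbell_sd)                    # the two values nearly coincide
```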
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tying general suture. A novel semiautomated device is proposed that may be advantageous to the current standard. Comparison testing in an excised caprine spine and simulated bench top model was performed. Three tests were performed: 1) perpendicular pull from fascia of caprine spine; 2) axial pull from fascia of caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39 whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55 whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56 whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. Data suggest the novel semiautomated device in fact may provide a more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
NASA Astrophysics Data System (ADS)
Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce
2017-08-01
Flagged uniform particle splitting was implemented with two methods to improve the computational efficiency of Monte Carlo track structure simulations with TOPAS-nBio by enhancing the production of secondary electrons in ionization events. In method 1 the Geant4 kernel was modified. In method 2 Geant4 was not modified. In both methods a unique flag number assigned to each new split electron was inherited by its progeny, permitting reclassification of the split events as if produced by independent histories. Computational efficiency and accuracy were evaluated for simulations of 0.5-20 MeV protons and 1-20 MeV u-1 carbon ions for three endpoints: (1) mean of the ionization cluster size distribution, (2) mean number of DNA single-strand breaks (SSBs) and double-strand breaks (DSBs) classified with DBSCAN, and (3) mean number of SSBs and DSBs classified with a geometry-based algorithm. For endpoint (1), simulation efficiency was 3 times lower when splitting electrons generated by direct ionization events of primary particles than when splitting electrons generated by the first ionization events of secondary electrons. The latter technique was selected for further investigation. The following results are for method 2, with relative efficiencies about 4.5 times lower for method 1. For endpoint (1), relative efficiency at 128 split electrons approached maximum, increasing with energy from 47.2 ± 0.2 to 66.9 ± 0.2 for protons, decreasing with energy from 51.3 ± 0.4 to 41.7 ± 0.2 for carbon. For endpoint (2), relative efficiency increased with energy, from 20.7 ± 0.1 to 50.2 ± 0.3 for protons, 15.6 ± 0.1 to 20.2 ± 0.1 for carbon. For endpoint (3) relative efficiency increased with energy, from 31.0 ± 0.2 to 58.2 ± 0.4 for protons, 23.9 ± 0.1 to 26.2 ± 0.2 for carbon. Simulation results with and without splitting agreed within 1% (2 standard deviations) for endpoints (1) and (2), within 2% (1 standard deviation) for endpoint (3). In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.
Gudimetla, V S Rao; Holmes, Richard B; Smith, Carey; Needham, Gregory
2012-05-01
The effect of anisotropic Kolmogorov turbulence on the log-amplitude correlation function for plane-wave fields is investigated using analysis, numerical integration, and simulation. A new analytical expression for the log-amplitude correlation function is derived for anisotropic Kolmogorov turbulence. The analytic results, based on the Rytov approximation, agree well with a more general wave-optics simulation based on the Fresnel approximation as well as with numerical evaluations, for low and moderate strengths of turbulence. The new expression reduces correctly to previously published analytic expressions for isotropic turbulence. The final results indicate that, as asymmetry becomes greater, the Rytov variance deviates from that given by the standard formula. This deviation becomes greater with stronger turbulence, up to moderate turbulence strengths. The anisotropic effects on the log-amplitude correlation function are dominant when the separation of the points is within the Fresnel length. In the direction of stronger turbulence, there is an enhanced dip in the correlation function at a separation close to the Fresnel length. The dip is diminished in the weak-turbulence axis, suggesting that energy redistribution via focusing and defocusing is dominated by the strong-turbulence axis. The new analytical expression is useful when anisotropy is observed in relevant experiments. © 2012 Optical Society of America
Development and Deployment of NASA's Budget Execution Dashboard
NASA Technical Reports Server (NTRS)
Putz, Peter
2009-01-01
This paper discusses the successful implementation of a highly visible company-wide management system and its potential to change managerial and accounting policies, processes and practices in support of organizational goals. Applying the conceptual framework of innovation in organizations, this paper describes the development and deployment process of the NASA Budget Execution Dashboard and the first two fiscal years of its use. It discusses the positive organizational changes triggered by the dashboard, like higher visibility of financial goals and variances between plans and actuals, increased involvement of all management levels in tracking and correcting of plan deviations, establishing comparable data standards across a strongly diversified organization, and enhanced communication between line organizations (NASA Centers) and product organizations (Mission Directorates). The paper also discusses the critical success factors experienced in this project: Strong leadership and division of management roles, rapid and responsive technology development, and frequent communication among stakeholders.
Liu, Timothy Y; Sanders, Jason L; Tsui, Fu-Chiang; Espino, Jeremy U; Dato, Virginia M; Suyama, Joe
2013-01-01
We studied the association between OTC pharmaceutical sales and volume of patients with influenza-like-illnesses (ILI) at an urgent care center over one year. OTC pharmaceutical sales explain 36% of the variance in the patient volume, and each standard deviation increase is associated with 4.7 more patient visits to the urgent care center (p<0.0001). Cross-correlation function analysis demonstrated that OTC pharmaceutical sales are significantly associated with patient volume during non-flu season (p<0.0001), but only the sales of cough and cold (p<0.0001) and thermometer (p<0.0001) categories were significant during flu season with a lag of two and one days, respectively. Our study is the first study to demonstrate and measure the relationship between OTC pharmaceutical sales and urgent care center patient volume, and presents strong evidence that OTC sales predict urgent care center patient volume year round.
NASA Technical Reports Server (NTRS)
Mozer, F. S.
1976-01-01
A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.
Aab, Alexander
2014-12-31
We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10^17.8 eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail as well as the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. Finally, the energy dependence of the mean and standard deviation of the Xmax distributions is compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.
NASA Astrophysics Data System (ADS)
Das, Kushal; Lehmann, Torsten
2014-07-01
The effect of ultra low operating temperature on mismatch among identically designed Silicon-on-Sapphire CMOS devices is investigated in detail from a circuit design view point. The evolution of transistor matching properties for different operating conditions at both room and 4.2 K temperature are presented. The statistical analysis reveals that mismatch at low temperature is effectively unrelated to that at room temperature, which disagrees with previously published literature. The measurement data was used to extract key transistor parameters and the consequence of temperature lowering on their respective variance is estimated. We find that standard deviation of the threshold-voltage mismatch deteriorates by a factor ∼2 at 4.2 K temperature. Similar to room temperature operation, mismatch at 4.2 K is bias point dependent and the degradation of matching at very low temperature depends to some extent on how the bias point shifts upon cooling.
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
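The weighted-sum-of-variances argument can be written down directly: if the count N_i of particles in size bin D_i is Poisson with mean λ_i, then X = Σ c·D_i^n·N_i has E[X] = Σ c·D_i^n·λ_i and Var[X] = Σ (c·D_i^n)²·λ_i, and the constant c cancels from the fractional standard deviation. The sketch below (plain numpy, a hypothetical exponential drop-size spectrum, not the paper's universal curves) evaluates that FSD for a few values of n.

```python
import numpy as np

def fractional_sd(D, expected_counts, n):
    """FSD of X = sum_i c D_i^n N_i when the N_i are independent Poisson counts.
    Var[X] = sum_i (c D_i^n)^2 lambda_i, so the constant c cancels in the ratio."""
    w = D ** n
    lam = expected_counts
    mean_X = np.sum(w * lam)
    var_X = np.sum(w ** 2 * lam)
    return np.sqrt(var_X) / mean_X

# Exponential drop-size spectrum sampled in narrow bins (hypothetical numbers).
D = np.linspace(0.1, 6.0, 300)               # mm
dD = D[1] - D[0]
lam = 1000.0 * np.exp(-1.5 * D) * dD         # expected count per bin in the sample volume
for n in (0, 3, 6):                          # concentration, water content ~ D^3, reflectivity ~ D^6
    print(n, fractional_sd(D, lam, n))       # FSD grows with n, as the paper's curves indicate
```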
Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce
2017-11-01
Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. It is the goal of this study to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in the SCGN and KLF14 loci showed methylation levels that were correlated with chronological age; however, the tested CpG sites in DLX5 did not show a correlation with age. Using a 52-saliva sample training set, two age-predictor models were developed by means of a multivariate linear regression analysis for age prediction. The two models performed similarly, with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus model, respectively. Another age-predictor model was also developed using a 40-blood sample training set that accounted for 71% of the age variance. This model gave a mean absolute deviation of 6.6 years for the training set and 10.3 years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens. These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
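The modelling step described here is an ordinary multivariate linear regression of age on per-site methylation levels, evaluated by the mean absolute deviation of the predictions. The sketch below (plain numpy, simulated methylation values with hypothetical age trends; not the study's data or its fitted model) shows the shape of that workflow.

```python
import numpy as np

def fit_age_model(methylation, age):
    """Least-squares age predictor from CpG methylation levels (columns = CpG sites)."""
    X = np.column_stack([np.ones(len(age)), methylation])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, age, rcond=None)
    return coef

def mean_absolute_deviation(coef, methylation, age):
    X = np.column_stack([np.ones(len(age)), methylation])
    return np.mean(np.abs(X @ coef - age))

# Hypothetical training data: 52 donors, 2 CpG sites whose methylation drifts with age.
rng = np.random.default_rng(3)
age = rng.uniform(5, 73, 52)
meth = np.column_stack([0.2 + 0.006 * age, 0.8 - 0.004 * age]) + rng.normal(0, 0.03, (52, 2))
coef = fit_age_model(meth, age)
print(mean_absolute_deviation(coef, meth, age))   # analogous to the reported MAD in years
```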
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Code of Federal Regulations, 2010 CFR
2010-04-01
... CLASS II GAMES § 547.17 How does a tribal gaming regulatory authority apply for a variance from these... 25 Indians 2 2010-04-01 2010-04-01 false How does a tribal gaming regulatory authority apply for a variance from these standards? 547.17 Section 547.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT...
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…
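The baseline fact being generalized is easy to verify: unbiasedness of the sample variance needs only uncorrelatedness, not independence. A minimal derivation (restating only the standard result; the paper's generalization itself is not reproduced here) is:

```latex
% For uncorrelated X_1,...,X_n with common mean \mu and variance \sigma^2,
% the sample variance is unbiased.
\begin{align*}
S^2 &= \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})^2, \qquad
\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_i,\\
E\!\left[\sum_{i}(X_i-\bar{X})^2\right]
  &= \sum_{i} E[X_i^2] - n\,E[\bar{X}^2]
   = n(\sigma^2+\mu^2) - n\!\left(\frac{\sigma^2}{n}+\mu^2\right)
   = (n-1)\,\sigma^2,\\
\therefore\quad E[S^2] &= \sigma^2 .
\end{align*}
```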
Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F
2016-08-01
To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as a part of their clinical practice, where low dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at different conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from other conditions were compared to this reference condition. A measure, which the authors refer to as Q, was introduced to assess the stability of features across different conditions; it is defined as the ratio of the reproducibility (across conditions) to the repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of histogram mean. Features calculated from lung nodules demonstrated similar results, with histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation of Q of 0.37 and 0.22, respectively. Surprisingly, histogram standard deviation and variance features were also quite robust. Some GLCM features were also quite robust across conditions, namely, diff. variance, sum variance, sum average, variance, and mean. Except for histogram mean, all features had a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean is the most robust feature in their study. The effects of acquisition and reconstruction conditions on GLCM features vary widely, though features involving a summation of products between intensities and probabilities tend to be more robust, barring a few exceptions. Overall, variation in density and texture features should be taken into account if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may be more reflective of changes due to acquisition and reconstruction conditions than of changes in the nodule itself.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J.; Burger, Oskar
2016-01-01
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082
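One way to read the Poisson bound described above: if each woman's completed fertility is Poisson given her underlying rate λ, then Var[K] = E[λ] + Var[λ], so the share of total variance attributable to persistent individual differences cannot exceed 1 − mean/variance. The sketch below (plain numpy, a simulated pure-Poisson sample, and my reading of the bound rather than the authors' exact estimator) computes that ceiling.

```python
import numpy as np

def max_individual_share(children_per_woman):
    """Upper bound on the share of fertility variance attributable to individual
    differences, assuming counts are Poisson given each woman's underlying rate:
    Var[K] = E[lambda] + Var[lambda]  =>  Var[lambda]/Var[K] <= 1 - mean/variance."""
    k = np.asarray(children_per_woman, dtype=float)
    m, v = k.mean(), k.var(ddof=1)
    return max(0.0, 1.0 - m / v)

rng = np.random.default_rng(4)
sample = rng.poisson(5.0, size=2000)          # hypothetical high-fertility, pure-Poisson sample
print(max_individual_share(sample))           # near 0: little room for individual differences
```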
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ=is the sample standard deviation and is... individual engine. FEL=Family Emission Limit (the standard if no FEL). F=.25×σ. (2) After each test pursuant...
Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P
2011-02-01
The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, the younger group (≤ 15 y of age) was significantly less negative for girls (boys, -0.8 mm and girls, -0.3 mm, p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r(p) 0.87-0.97 in boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard-deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formants frequency, and standard-deviation of F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard-deviation of F0, and standard-deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data differences were also found in F1 and F2 frequencies values and in the standard-deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12 values of the standard deviation changed for the x- and y-components from 0.5 to 4 m/s, and for the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power law dependence with exponent changing from 0.22 to 1.3 depending on the time of day, and σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviation of three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This study provides proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This will give the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
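Although the paper's specific error formulas are not reproduced here, the Monte Carlo side of such an analysis is easy to sketch: generate phase-shifted intensity frames, recover the phase by least squares, convert to height, and take the standard deviation over many noise realizations. The code below assumes plain numpy, an eight-step phase-shifting model I_k = A + B·cos(φ + δ_k), and illustrative values for the wavelength and noise level (all assumptions, not the paper's settings); only intensity noise is simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
wavelength = 633e-9                              # m (He-Ne, illustrative)
deltas = np.arange(8) * 2 * np.pi / 8            # 8 equally spaced phase shifts
A, B, phi_true = 1.0, 0.8, 0.7                   # background, modulation, true phase
M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])

def ls_phase(frames):
    """Least-squares phase from I_k = a0 + a1*cos(d_k) + a2*sin(d_k), with a1 = B*cos(phi),
    a2 = -B*sin(phi)."""
    a0, a1, a2 = np.linalg.lstsq(M, frames, rcond=None)[0]
    return np.arctan2(-a2, a1)

heights = []
for _ in range(5000):                            # Monte Carlo over intensity noise only
    frames = A + B * np.cos(phi_true + deltas) + rng.normal(0.0, 0.02, deltas.size)
    heights.append(ls_phase(frames) * wavelength / (4 * np.pi))
print(np.std(heights))                           # empirical standard deviation of the height
```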
Effect of laser frequency noise on fiber-optic frequency reference distribution
NASA Technical Reports Server (NTRS)
Logan, R. T., Jr.; Lutes, G. F.; Maleki, L.
1989-01-01
The effect of the linewidth of a single longitudinal-mode laser on the frequency stability of a frequency reference transmitted over a single-mode optical fiber is analyzed. The interaction of the random laser frequency deviations with the dispersion of the optical fiber is considered in order to determine theoretically the effect on the Allan deviation (square root of the Allan variance) of the transmitted frequency reference. It is shown that the magnitude of this effect may determine the limit of the ultimate stability possible for frequency reference transmission on optical fiber, but that it is not a serious limitation to present system performance.
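For reference, the Allan variance mentioned here is computed from block averages of fractional-frequency data: σ_y²(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩. The sketch below (plain numpy, white frequency noise at a hypothetical level, not the fiber-link model of the paper) computes the non-overlapping Allan deviation at several averaging times.

```python
import numpy as np

def allan_deviation(y, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data y for an averaging
    time of tau_samples samples: sigma_y^2 = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    y = np.asarray(y, dtype=float)
    m = y.size // tau_samples
    ybar = y[: m * tau_samples].reshape(m, tau_samples).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(6)
y = rng.normal(0.0, 1e-12, 100_000)              # white frequency noise (hypothetical level)
for tau in (1, 10, 100, 1000):
    print(tau, allan_deviation(y, tau))          # falls roughly as tau**-0.5 for white FM noise
```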
The Influence of the Stylus Size of the Touch Probe on the Roundness Deviation
NASA Astrophysics Data System (ADS)
Mizera, Ondrej; Cepova, Lenka
2017-12-01
The article deals with the influence of the touch probe stylus on the measured circularity deviation on the WENZEL LH 65 X3M 3D measuring machine using the Metrosoft QUARTIZ R6 software at the laboratory of VŠB-TU Ostrava, Faculty of Mechanical Engineering, Department of Machining, Assembly and Engineering Metrology. The aim was to analyse the influence of individual styli of different lengths and tip diameters on the accuracy of the circularity deviation measurement.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
McGowan, Ian; Janocko, Laura; Burneisen, Shaun; Bhat, Anand; Richardson-Harman, Nicola
2015-01-01
To determine the intra- and inter-subject variability of mucosal cytokine gene expression in rectal biopsies from healthy volunteers and to screen cytokine and chemokine mRNA as potential biomarkers of mucosal inflammation. Rectal biopsies were collected from 8 participants (3 biopsies per participant) and 1 additional participant (10 biopsies). Quantitative reverse transcription polymerase chain reaction (RT-qPCR) was used to quantify IL-1β, IL-6, IL-12p40, IL-8, IFN-γ, MIP-1α, MIP-1β, RANTES, and TNF-α gene expression in the rectal tissue. The intra-assay, inter-biopsy and inter-subject variance was measured in the eight participants. Bootstrap re-sampling of the biopsy measurements was performed to determine the accuracy of gene expression data obtained for 10 biopsies obtained from one participant. Cytokines were both non-normalized and normalized using four reference genes (GAPDH, β-actin, β2 microglobulin, and CD45). Cytokine measurement accuracy was increased with the number of biopsy samples, per person; four biopsies were typically needed to produce a mean result within a 95% confidence interval of the subject's cytokine level approximately 80% of the time. Intra-assay precision (% geometric standard deviation) ranged between 8.2 and 96.9 with high variance between patients and even between different biopsies from the same patient. Variability was not greatly reduced with the use of reference genes to normalize data. The number of biopsy samples required to provide an accurate result varied by target although 4 biopsy samples per subject and timepoint, provided for >77% accuracy across all targets tested. Biopsies within the same subjects and between subjects had similar levels of variance while variance within a biopsy (intra-assay) was generally lower. Normalization of inflammatory cytokines against reference genes failed to consistently reduce variance. The accuracy and reliability of mRNA expression of inflammatory cytokines will set a ceiling on the ability of these measures to predict mucosal inflammation. Techniques to reduce variability should be developed within a larger cohort of individuals before normative reference values can be validated. Copyright © 2014 Elsevier Ltd. All rights reserved.
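The bootstrap step described above (estimating how many biopsies are needed before their mean reliably falls inside the subject's 95% confidence interval) can be sketched as follows. The code assumes plain numpy, hypothetical expression values for the ten-biopsy participant, and a simplified coverage criterion; it illustrates the resampling idea rather than the authors' exact procedure.

```python
import numpy as np

def coverage_by_biopsy_count(expression_values, n_biopsies, n_boot=10_000, seed=0):
    """Bootstrap estimate of how often the mean of `n_biopsies` resampled biopsies falls
    within the 95% CI of the subject's overall mean (hypothetical values, sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(expression_values, dtype=float)
    ci_half = 1.96 * x.std(ddof=1) / np.sqrt(x.size)
    lo, hi = x.mean() - ci_half, x.mean() + ci_half
    means = rng.choice(x, size=(n_boot, n_biopsies), replace=True).mean(axis=1)
    return np.mean((means >= lo) & (means <= hi))

ten_biopsies = [2.1, 1.7, 2.8, 2.3, 1.9, 2.6, 2.0, 3.1, 2.4, 1.8]   # hypothetical values
for b in (1, 2, 4, 6):
    print(b, coverage_by_biopsy_count(ten_biopsies, b))   # coverage rises with biopsy count
```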
Hinton, Pamela S; Johnstone, Brick; Blaine, Edward; Bodling, Angela
2011-09-01
To determine the relative influence of current exercise and diet on the late-life cognitive health of former Division I collision-sport collegiate athletes (ie, football players) compared with noncollision-sport athletes and non-athletes. Graduates (n = 400) of a Midwestern university (average age, 64.09 years; standard deviation, 13.32) completed a self-report survey to assess current demographics/physical characteristics, exercise, diet, cognitive difficulties, and physical and mental health. Former football players reported more cognitive difficulties, as well as worse physical and mental health than controls. Among former football players, greater intake of total and saturated fat and cholesterol and lower overall diet quality were significantly correlated with cognitive difficulties; current dietary intake was not associated with cognitive health for the noncollision-sport athletes or nonathletes. Hierarchical regressions predicting cognitive difficulties indicated that income was positively associated with fewer cognitive difficulties and predicted 8% of the variance; status as a former football player predicted an additional 2% of the variance; and the interaction between being a football player and total dietary fat intake significantly predicted an additional 6% of the total variance (total model predicted 16% of variance). Greater intake of dietary fat was associated with increased cognitive difficulties, but only in the former football players, and not in the controls. Prior participation in football was associated with worse physical and mental health, while more frequent vigorous exercise was associated with higher physical and mental health ratings. Former football players reported more late-life cognitive difficulties and worse physical and mental health than former noncollision-sport athletes and nonathletes. A novel finding of the present study is that current dietary fat was associated with more cognitive difficulties, but only in the former football players. These results suggest the need for educational interventions to encourage healthy dietary habits to promote the long-term cognitive health of collision-sport athletes.
N2/O2/H2 Dual-Pump CARS: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6 % of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
Combining the Hanning windowed interpolated FFT in both directions
NASA Astrophysics Data System (ADS)
Chen, Kui Fu; Li, Yan Feng
2008-06-01
The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition, with the variance reduced by at most 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
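As background for the combination scheme, the classical two-spectral-line estimator with a Hanning window has a closed form: if α is the ratio of the larger neighbouring spectral line to the peak line, the fractional bin offset is δ = (2α − 1)/(α + 1). The sketch below (plain numpy; a synthetic off-bin sinusoid as the test signal) implements that two-line baseline; the paper's three-line, both-direction weighting is not reproduced.

```python
import numpy as np

def hann_two_line_freq(x, fs):
    """Two-line interpolated FFT frequency estimate with a Hanning window:
    delta = (2*alpha - 1)/(alpha + 1), alpha = larger neighbouring line / peak line."""
    n = x.size
    X = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(X[1:-1])) + 1                  # peak bin (exclude DC and Nyquist)
    if X[k + 1] >= X[k - 1]:
        alpha = X[k + 1] / X[k]
        delta = (2 * alpha - 1) / (alpha + 1)
    else:
        alpha = X[k - 1] / X[k]
        delta = -(2 * alpha - 1) / (alpha + 1)
    return (k + delta) * fs / n

fs, f0 = 1000.0, 123.456                             # Hz; f0 deliberately off-bin
t = np.arange(2048) / fs
print(hann_two_line_freq(np.sin(2 * np.pi * f0 * t), fs))   # close to 123.456
```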
Deleterious Mutations, Apparent Stabilizing Selection and the Maintenance of Quantitative Variation
Kondrashov, A. S.; Turelli, M.
1992-01-01
Apparent stabilizing selection on a quantitative trait that is not causally connected to fitness can result from the pleiotropic effects of unconditionally deleterious mutations, because as N. Barton noted, ``... individuals with extreme values of the trait will tend to carry more deleterious alleles ....'' We use a simple model to investigate the dependence of this apparent selection on the genomic deleterious mutation rate, U; the equilibrium distribution of K, the number of deleterious mutations per genome; and the parameters describing directional selection against deleterious mutations. Unlike previous analyses, we allow for epistatic selection against deleterious alleles. For various selection functions and realistic parameter values, the distribution of K, the distribution of breeding values for a pleiotropically affected trait, and the apparent stabilizing selection function are all nearly Gaussian. The additive genetic variance for the quantitative trait is kQa(2), where k is the average number of deleterious mutations per genome, Q is the proportion of deleterious mutations that affect the trait, and a(2) is the variance of pleiotropic effects for individual mutations that do affect the trait. In contrast, when the trait is measured in units of its additive standard deviation, the apparent fitness function is essentially independent of Q and a(2); and β, the intensity of selection, measured as the ratio of additive genetic variance to the ``variance'' of the fitness curve, is very close to s = U/k, the selection coefficient against individual deleterious mutations at equilibrium. Therefore, this model predicts appreciable apparent stabilizing selection if s exceeds about 0.03, which is consistent with various data. However, the model also predicts that β must equal V(m)/V(G), the ratio of new additive variance for the trait introduced each generation by mutation to the standing additive variance. Most, although not all, estimates of this ratio imply apparent stabilizing selection weaker than generally observed. A qualitative argument suggests that even when direct selection is responsible for most of the selection observed on a character, it may be essentially irrelevant to the maintenance of variation for the character by mutation-selection balance. Simple experiments can indicate the fraction of observed stabilizing selection attributable to the pleiotropic effects of deleterious mutations. PMID:1427047
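The key identities stated in the abstract (additive variance V_A = kQa² and apparent selection intensity β ≈ s = U/k when the trait is scaled by its additive SD) can be checked numerically. The parameter values below (U, s, Q, a²) are illustrative assumptions, not the paper's estimates.

```python
# Minimal numerical sketch of the model's identities with assumed parameters.
U = 1.0          # genomic deleterious mutation rate (assumed)
s = 0.03         # selection coefficient per deleterious mutation at equilibrium (assumed)
k = U / s        # mean number of deleterious mutations per genome
Q = 0.05         # fraction of mutations that pleiotropically affect the trait (assumed)
a2 = 0.1         # variance of pleiotropic effects per affecting mutation (assumed)

VA = k * Q * a2  # additive genetic variance of the trait
beta = s         # intensity of apparent stabilizing selection (trait in additive-SD units)
print(f"k = {k:.1f}, V_A = {VA:.3f}, beta ~ {beta}")
# The model also implies beta = V_m / V_G, i.e. the same beta should equal the
# ratio of mutational to standing additive variance.
```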
Genetic analysis of Holstein cattle populations in Brazil and the United States.
Costa, C N; Blake, R W; Pollak, E J; Oltenacu, P A; Quaas, R L; Searle, S R
2000-12-01
Genetic relationships between Brazilian and US Holstein cattle populations were studied using first-lactation records of 305-d mature equivalent (ME) yields of milk and fat of daughters of 705 sires in Brazil and 701 sires in the United States, 358 of which had progeny in both countries. Components of (co)variance and genetic parameters were estimated from all data and from within herd-year standard deviation for milk (HYSD) data files using bivariate and multivariate sire models and DFREML procedures distinguishing the two countries. Sire (residual) variances from all data for milk yield were 51 to 59% (58 to 101%) as large in Brazil as those obtained from half-sisters in the average US herd. Corresponding proportions of the US variance in fat yield that were found in Brazil were 30 to 41% for the sire component of variance and 48 to 80% for the residual. Heritabilities for milk and fat yields from multivariate analysis of all the data were 0.25 and 0.22 in Brazil, and 0.34 and 0.35 in the United States. Genetic correlations between milk and fat were 0.79 in Brazil and 0.62 in the United States. Genetic correlations between countries were 0.85 for milk, 0.88 for fat, 0.55 for milk in Brazil and fat in the US, and 0.67 for fat in Brazil and milk in the United States. Correlated responses in Brazil from sire selection based on the US information increased with average HYSD in Brazil. Largest daughter yield response was predicted from information from half-sisters in low HYSD US herds (0.75 kg/kg for milk; 0.63 kg/kg for fat), which was 14% to 17% greater than estimates from all US herds because the scaling effects were less severe from heterogeneous variances. Unequal daughter response from unequal genetic (co)variances under restrictive Brazilian conditions is evidence for the interaction of genotype and environment. The smaller and variable yield expectations of daughters of US sires in Brazilian environments suggest the need for specific genetic improvement strategies in Brazilian Holstein herds. A US data file restricting daughter information to low HYSD US environments would be a wise choice for across-country evaluation. Procedures to incorporate such foreign evaluations should be explored to improve the accuracy of genetic evaluations for the Brazilian Holstein population.
Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model
NASA Astrophysics Data System (ADS)
Tomkins, C. D.; Springer, E. P.; Costigan, K. R.
2001-12-01
Integrated modeling efforts at the Los Alamos National Laboratory aim to simulate the hydrologic cycle and study the impacts of climate variability and land use changes on water resources and ecosystem function at the regional scale. The integrated model couples three existing models independently responsible for addressing the atmospheric, land surface, and ground water components: the Regional Atmospheric Model System (RAMS), the Los Alamos Distributed Hydrologic System (LADHS), and the Finite Element and Heat Mass (FEHM). The upper Rio Grande Basin, extending 92,000 km2 over northern New Mexico and southern Colorado, serves as the test site for this model. RAMS uses nested grids to simulate meteorological variables, with the smallest grid over the Rio Grande having 5-km horizontal grid spacing. As LADHS grid spacing is 100 m, a downscaling approach is needed to estimate meteorological variables from the 5km RAMS grid for input into LADHS. This study presents daily and cumulative precipitation predictions, in the month of October for water year 1993, and an approach to compare LADHS downscaled precipitation to RAMS-simulated precipitation. The downscaling algorithm is based on kriging, using topography as a covariate to distribute the precipitation and thereby incorporating the topographical resolution achieved at the 100m-grid resolution in LADHS. The results of the downscaling are analyzed in terms of the level of variance introduced into the model, mean simulated precipitation, and the correlation between the LADHS and RAMS estimates. Previous work presented a comparison of RAMS-simulated and observed precipitation recorded at COOP and SNOTEL sites. The effects of downscaling the RAMS precipitation were evaluated using Spearman and linear correlations and by examining the variance of both populations. The study focuses on determining how the downscaling changes the distribution of precipitation compared to the RAMS estimates. Spearman correlations computed for the LADHS and RAMS cumulative precipitation reveal a disassociation over time, with R equal to 0.74 at day eight and R equal to 0.52 at day 31. Linear correlation coefficients (Pearson) returned a stronger initial correlation of 0.97, decreasing to 0.68. The standard deviations for the 2500 LADHS cells underlying each 5km RAMS cell range from 8 mm to 695 mm in the Sangre de Cristo Mountains and 2 mm to 112 mm in the San Luis Valley. Comparatively, the standard deviations of the RAMS estimates in these regions are 247 mm and 30 mm respectively. The LADHS standard deviations provide a measure of the variability introduced through the downscaling routine, which exceeds RAMS regional variability by a factor of 2 to 4. The coefficient of variation for the average LADHS grid cell values and the RAMS cell values in the Sangre de Cristo Mountains are 0.66 and 0.27, respectively, and 0.79 and 0.75 in the San Luis Valley. The coefficients of variation evidence the uniformity of the higher precipitation estimates in the mountains, especially for RAMS, and also the lower means and variability found in the valley. Additionally, Kolmogorov-Smirnov tests indicate clear spatial and temporal differences in mean simulated precipitation across the grid.
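The comparison workflow described above (rank and linear correlations between downscaled and coarse-grid precipitation, plus coefficients of variation as a variability measure) can be sketched as follows. The array names, sizes, and distributions are assumptions standing in for LADHS and RAMS output, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical downscaled (fine-grid mean) vs coarse-grid cumulative precipitation
# for matching cells; values are synthetic and illustrative only.
rng = np.random.default_rng(0)
rams = rng.gamma(shape=2.0, scale=50.0, size=200)            # coarse-cell totals, mm
ladhs = rams * rng.lognormal(mean=0.0, sigma=0.4, size=200)  # downscaled cell means, mm

spearman_r, _ = stats.spearmanr(ladhs, rams)   # rank association
pearson_r, _ = stats.pearsonr(ladhs, rams)     # linear association
cv_ladhs = ladhs.std(ddof=1) / ladhs.mean()    # coefficient of variation (downscaled)
cv_rams = rams.std(ddof=1) / rams.mean()       # coefficient of variation (coarse)
print(spearman_r, pearson_r, cv_ladhs, cv_rams)
```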
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
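The two kernel methods contrasted above can be sketched generically: GB uses a linear (GBLUP-style) genomic relationship matrix, while GK uses a Gaussian kernel on scaled marker distances. This is a standard construction under common conventions (median-scaled distances for GK), not the authors' exact code, and the toy marker matrix is an assumption.

```python
import numpy as np

def gb_kernel(X):
    """Linear (GBLUP-style) relationship kernel from centered marker genotypes."""
    Xc = X - X.mean(axis=0)
    return Xc @ Xc.T / X.shape[1]

def gk_kernel(X, h=1.0):
    """Gaussian kernel on squared marker distances, scaled by their median."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-h * d2 / np.median(d2[d2 > 0]))

# toy marker matrix: 5 lines x 20 markers coded 0/1/2 (illustrative only)
rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(5, 20)).astype(float)
print(gb_kernel(X).shape, gk_kernel(X).shape)   # (5, 5) kernels for the line effects
```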
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José
2018-01-01
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a written...) The CSO may forward the application to the Chief Health, Safety and Security Officer. (2) If the CSO...
Eiler, J; Kleinholdermann, U; Albers, D; Dahms, J; Hermann, F; Behrens, C; Luedemann, M; Klingmueller, V; Alzen, G F P
2012-10-01
Ultrasound elastography by acoustic radiation force impulse imaging (ARFI) is used in adults for non-invasive measurement of liver stiffness, indicating liver diseases like fibrosis. To establish ARFI in children and adolescents we determined standard values of healthy liver tissue and analysed potentially influencing factors. 132 patients between 0 and 17 years old were measured using ARFI. None of them had any liver disease or any other disease that could affect the liver secondarily. All patients had a normal ultrasound scan, a normal BMI and normal liver function tests. The mean value of all ARFI measurements was calculated and potentially influencing factors were analysed. The mean value of all ARFI elastography measurements was 1.16 m/sec (SD ± 0.14 m/sec). Neither age (p = 0.533) nor depth of measurement (p = 0.066) had a significant influence on ARFI values, whereas a significant effect of gender was found with lower ARFI values in females (p = 0.025); however, there was no significant interaction between age groups (before or after puberty) and gender (p = 0.276). There was an interlobar difference with lower values in the right liver lobe compared to the left (p = 0.036) and with a significantly lower variance (p < 0.001). Consistent values were measured by different examiners (p = 0.108); however, the inter-examiner variance deviated significantly (p < 0.001). ARFI elastography is a reliable method to measure liver stiffness in children and adolescents. In relation to studies which analyse liver diseases, the standard value of 1.16 m/sec (± 0.14 m/sec) allows a differentiation of healthy versus pathological liver tissue. © Georg Thieme Verlag KG Stuttgart · New York.
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
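One reason such matrix summaries suffice for secondary analyses is that the covariance matrix can be rebuilt from a reported correlation matrix and standard deviations as Cov = D R D, with D the diagonal matrix of SDs. A minimal sketch with made-up numbers:

```python
import numpy as np

# Reconstruct a covariance matrix from a published correlation matrix and SDs
# (toy values; any real report would supply its own R and SD vector).
R = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
sd = np.array([2.0, 1.5, 3.0])
cov = np.diag(sd) @ R @ np.diag(sd)
print(cov)
# Together with means, these summaries are sufficient inputs for many secondary
# analyses (e.g., regression or SEM) without access to the raw data.
```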
Anatomy of emotion: a 3D study of facial mimicry.
Ferrario, V F; Sforza, C
2007-01-01
Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (intrasession error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among four different (independent) sessions, the left eye closure had the worst repeatability, the right eye closure had the best one; the differences among various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated a sufficient repeatability for a future clinical application. Great care should be taken to assure a consistent marker positioning in all the subjects.
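The technical error of measurement (TEM) used above is typically computed as a within-subject root-mean-square over repeated measurements. The sketch below is one common formulation (a generalization of Dahlberg's duplicate-measurement formula); the exact estimator used in the paper may differ, and the landmark displacement values are hypothetical.

```python
import numpy as np

def tem(repeats):
    """Technical error of measurement from an (n_subjects, k_repetitions) array."""
    repeats = np.asarray(repeats, dtype=float)
    n, k = repeats.shape
    dev2 = (repeats - repeats.mean(axis=1, keepdims=True)) ** 2  # within-subject deviations
    return np.sqrt(dev2.sum() / (n * (k - 1)))

# toy data: 4 subjects, 3 repetitions of one landmark displacement (mm)
print(tem([[5.1, 5.3, 5.0],
           [7.2, 7.0, 7.4],
           [3.9, 4.1, 4.0],
           [6.5, 6.7, 6.6]]))
```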
Wongkornchaowalit, Norachai; Lertchirakarn, Veera
2011-03-01
Important limitations of mineral trioxide aggregate for use in clinical procedures are extended setting time and difficult handling characteristics. The removal of gypsum at the end stage of the Portland cement manufacturing process and polycarboxylate superplasticizer admixture may solve these limitations. Different concentrations of polycarboxylate superplasticizer (0%, 1.2%, 1.8%, and 2.4% by volume) and liquid-to-powder ratios (0.27, 0.30, and 0.33 by weight) were mixed with white Portland cement without gypsum (AWPC-experimental material). Type 1 ordinary white Portland cement mixed with distilled water at the same ratios as the experimental material was used as controls. All samples were tested for setting time and flowability according to the International Organization for Standardization 6876:2001 guideline. The data were analyzed by two-way analysis of variance. Then, one-way analysis of variance and multiple comparison tests were used to analyze the significance among groups. The data are presented in mean ± standard deviation values. In all experimental groups, the setting times were in the range of 4.2 ± 0.4 to 11.3 ± 0.2 minutes, which were significantly (p < 0.05) lower than the control groups (26.0 ± 2.4 to 54.8 ± 2.5 minutes). The mean flows of AWPC plus 1.8% and 2.4% polycarboxylate superplasticizer groups were significantly increased (p < 0.001) at all liquid-to-powder ratios compared with control groups. Polycarboxylate superplasticizer at concentrations of 1.8% and 2.4% and the experimental liquid-to-powder ratios reduced setting time and increased flowability of cement, which would be beneficial for clinical use. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Quantifying economic fluctuations by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki
2001-09-01
The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system so this framework is of potential value if applied to economic systems. This thesis compares the statistics of cross-correlation matrix
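A minimal sketch of the RMT comparison described above: the eigenvalues of an empirical cross-correlation matrix are checked against the Marchenko-Pastur bounds expected for pure noise. The synthetic returns and dimensions below are assumptions for illustration; in real data, eigenvalues outside the noise band are the signal of genuine cross-correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 50, 1000                        # number of stocks, number of return observations
returns = rng.standard_normal((T, N))  # i.i.d. "returns" -> pure-noise benchmark

R = (returns - returns.mean(0)) / returns.std(0)   # standardize each series
C = R.T @ R / T                                    # empirical correlation matrix
eigvals = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2  # RMT noise band
print(eigvals.min(), eigvals.max(), (lam_minus, lam_plus))
# Eigenvalues escaping [lam_minus, lam_plus] in real market data indicate genuine
# correlations (e.g., the market mode) rather than noise.
```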
Matsuda, Aya; Hara, Takeshi; Miyata, Kazunori; Matsuo, Hiroshi; Murata, Hiroshi; Mayama, Chihiro; Asaoka, Ryo
2015-09-01
To study the efficacy of pattern deviation (PD) values in the estimation of visual field compensating the influence of cataract in eyes with glaucoma. The study subjects comprised 48 eyes of 37 glaucoma patients. Mean total deviation values (mTDs) on Humphrey Field Analyzer after cataract surgery were compared with mean PD (mPD) before the surgery. Visual field measurements were carried out ≤6 months before (VF(pre)) and following (VF(post)) successful cataract surgery. The differences between the mPD or mTD values in the VF(pre) and the mTD values in the VF(post) (denoted εmPD and ΔmTD, respectively) were calculated, and the influence of the extent of 'true' glaucomatous visual field damage or cataract (as represented by εmPD and ΔmTD, respectively) on this difference was also investigated. There was a significant difference between mTD in the VF(pre) and mTD in the VF(post) (p<0.001, repeated measures analysis of variance). There was no significant difference between mPD in the VF(pre) and mTD in the VF(post) (p=0.06); however, εmPD was significantly correlated with the mTD in VF(post) and also ΔmTD (R(2)=0.56 and 0.27, p<0.001, Pearson's correlation). The accurate prediction of the mTD in the VF(post) can be achieved using the pattern standard deviation (PSD), mTD and also visual acuity before surgery. Clinicians should be very careful when reviewing the VF of a patient with glaucoma and cataract since PD values may underestimate glaucomatous VF damage in patients with advanced disease and also overestimate glaucomatous VF damage in patients with early to moderate cataract. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
2012-01-01
Background The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring if there is a global relationship within the distribution. Methods To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation from the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results The Log-Normal distribution provided a poor fit for the survey data, with Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences for the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean alcohol consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R2 = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197) (R2 = 0.9474) for men. Conclusions Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing for alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
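Because the standard deviation is roughly proportional to the mean, the whole Gamma consumption distribution can be parameterized from average consumption alone. The sketch below does this under that assumption; the 1.258 factor is the estimate for women quoted above (1.171 for men), while the 15 g/day mean and 60 g/day cut-off are purely illustrative.

```python
from scipy import stats

def gamma_from_mean(mean, sd_per_mean=1.258):
    """Gamma model of drinkers' consumption parameterized from the mean alone,
    assuming sd ~ sd_per_mean * mean."""
    sd = sd_per_mean * mean
    shape = (mean / sd) ** 2     # Gamma shape from mean and sd
    scale = sd ** 2 / mean       # Gamma scale from mean and sd
    return stats.gamma(a=shape, scale=scale)

dist = gamma_from_mean(15.0)      # e.g., mean of 15 g/day among drinkers (illustrative)
print(dist.mean(), dist.std())    # recovers the assumed mean and sd
print(1 - dist.cdf(60.0))         # e.g., estimated share of drinkers above 60 g/day
```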
Fundamental movement skills and balance of children with Down syndrome.
Capio, C M; Mak, T C T; Tse, M A; Masters, R S W
2018-03-01
Conclusive evidence supports the importance of fundamental movement skills (FMS) proficiency in promoting physical activity and countering obesity. In children with Down Syndrome (DS), FMS development is delayed, which has been suggested to be associated with balance deficits. This study therefore examined the relationship between FMS proficiency and balance ability in children with DS, with the aim of contributing evidence to programmes that address FMS delay. Participants consisted of 20 children with DS (7.1 ± 2.9 years old) and an age-matched control group of children with typical development (7.25 ± 2.5 years). In the first part of the study, FMS (i.e. locomotor and object control) proficiency of the children was tested using the Test of Gross Motor Development-2. Balance ability was assessed using a force platform to measure centre of pressure average velocity (AV; mm/sec), path length (mm), medio-lateral standard deviation (mm) and antero-posterior standard deviation (mm). In the second part of the study, children with DS participated in 5 weeks of FMS training. FMS proficiency and balance ability were tested post-training and compared to pre-training scores. Verbal and visuo-spatial short-term memory capacities were measured at pre-training to verify the role of working memory in skill learning. FMS proficiency was associated with centre of pressure parameters in children with DS but not in children with typical development. After controlling for age, AV was found to predict significant variance in locomotor (R2 = 0.61, P < 0.001) and object control (R2 = 0.69, P < 0.001) scores. FMS proficiency and mastery improved after FMS training, as did AV, path length and antero-posterior standard deviation (all P < 0.05). Verbal and visuo-spatial short-term memory did not interact with the effects of training. Children with DS who have better balance ability tend to have more proficient FMS. Skill-specific training improved not only FMS sub-skills but static balance stability as well. Working memory did not play a role in the changes caused by skills training. Future research should examine the causal relationship between balance and FMS. © 2017 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
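The time-domain and Poincaré indices listed above (SDNN, RMSSD, pNN50, SD1, SD2 and the SD1/SD2 ratio) can be computed from an RR-interval series as in the following sketch. The definitions follow the usual HRV conventions rather than the authors' software, and the synthetic RR series is an assumption.

```python
import numpy as np

def hrv_indices(rr_ms):
    """Time-domain and Poincaré HRV indices from RR intervals in ms (a sketch)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                          # SD of all normal RR intervals
    rmssd = np.sqrt(np.mean(diff ** 2))            # root mean square of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)   # % successive differences > 50 ms
    sd1 = np.sqrt(0.5 * diff.var(ddof=1))          # Poincaré short-term (beat-to-beat) SD
    sd2 = np.sqrt(max(2 * sdnn ** 2 - 0.5 * diff.var(ddof=1), 0.0))  # long-term SD
    return dict(SDNN=sdnn, RMSSD=rmssd, pNN50=pnn50, SD1=sd1, SD2=sd2,
                SD1_SD2=sd1 / sd2)

rr = 800 + 40 * np.random.default_rng(3).standard_normal(300)  # synthetic RR series, ms
print(hrv_indices(rr))
```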
The effects of auditory stimulation with music on heart rate variability in healthy women
Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos
2013-01-01
OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils
Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.
2008-01-01
Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
NASA Astrophysics Data System (ADS)
Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay
2017-11-01
Purpose: To evaluate the potential use of wavelet analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC) and 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). Here, the Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance) derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal), respectively, were used to quantify tumor texture. Independent t-tests or Wilcoxon rank sum tests, depending on data normality, were used as exploratory univariate analysis. Stepwise logistic regression and receiver operator characteristics (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 out of 6 wavelet-based texture measures (except homogeneity) were higher for malignant tumors compared to benign, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed significant (p<0.05) differences between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction) extracted from the excretory and pre-contrast phase, respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared toward reliably characterizing and stratifying renal masses.
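A sketch of this kind of feature extraction, using PyWavelets for a 3-level Haar decomposition and simple per-subband statistics, is given below. Only four of the six metrics (energy, entropy, SD, variance) are sketched, the metric definitions and the toy ROI are assumptions, and the study's exact implementation (including its homogeneity and contrast measures) may differ.

```python
import numpy as np
import pywt

def haar_texture_features(image, levels=3):
    """Per-subband texture metrics from a multi-level Haar decomposition (a sketch)."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), 'haar', level=levels)
    feats = {}
    # coeffs[1:] holds (horizontal, vertical, diagonal) detail bands, coarsest first
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        for name, band in zip(('horizontal', 'vertical', 'diagonal'), (cH, cV, cD)):
            c = band.ravel()
            p = np.abs(c) / (np.abs(c).sum() + 1e-12)          # normalized magnitudes
            feats[(lvl, name)] = dict(
                energy=float(np.sum(c ** 2)),
                entropy=float(-np.sum(p * np.log2(p + 1e-12))),
                std=float(c.std()),
                variance=float(c.var()),
            )
    return feats

roi = np.random.default_rng(4).random((64, 64))   # toy "tumor ROI"
print(len(haar_texture_features(roi)))            # 3 levels x 3 directions = 9 subbands
```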
Relationship Between General Nutrition Knowledge and Dietary Quality in Elite Athletes.
Spronk, Inge; Heaney, Susan E; Prvan, Tania; O'Connor, Helen T
2015-06-01
This study investigated the association between general nutrition knowledge and dietary quality in a convenience sample of athletes (≥ state level) recruited from four Australian State Sport Institutes. General nutrition knowledge was measured by the validated General Nutrition Knowledge Questionnaire and diet quality by an adapted version of the Australian Recommended Food Score (A-ARFS) calculated from food frequency questionnaire data. Analysis of variance and linear modeling were used to assess relationships between variables. Data are reported as mean (standard deviation). A total of 101 athletes (Males: 37; Females: 64), aged 18.6 (4.6) years, were recruited mainly from team sports (72.0%). Females scored higher than males for both nutrition knowledge (Females: 59.9%; Males: 55.6%; p = .017) and total A-ARFS (Females: 54.2%; Males: 49.4%; p = .016). There was no significant influence of age, level of education, athletic caliber or team/individual sport participation on nutrition knowledge or total A-ARFS. However, athletes engaged in previous dietetic consultation had significantly higher nutrition knowledge (61.6% vs. 56.6%; p = .034) but not total A-ARFS (53.6% vs. 52.0%; p = .466). Nutrition knowledge was weakly but positively associated with total A-ARFS (r = .261, p = .008) and the A-ARFS vegetable subgroup (r = .252, p = .024), independently explaining 6.8% and 5.1% of the variance, respectively. Gender independently explained 5.6% of the variance in nutrition knowledge (p = .017) and 6.7% in total A-ARFS (p = .016). Higher nutrition knowledge and female gender were weakly but positively associated with better diet quality. Given the importance of nutrition to health and optimal sports performance, intervention to improve nutrition knowledge and healthy eating is recommended, especially for young male athletes.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index based on the standard deviation to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, which is the lowest value in our experiment. Thus, at this time, the highest SBS threshold has been achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system, and it also serves as a design guideline for better SBS suppression.
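The homogeneity index itself is straightforward to compute: the standard deviation of the normalized spectral-line intensities, where a smaller value indicates a flatter (more homogeneous) spectrum. The line intensities in the sketch below are made up for illustration; in practice they would come from the phase-modulated seed spectrum.

```python
import numpy as np

# Standard-deviation homogeneity index for a set of spectral lines (illustrative values).
lines = np.array([0.9, 1.1, 1.0, 0.95, 1.05, 1.0])   # relative intensities of the lines
p = lines / lines.sum()                               # normalize to unit total power
homogeneity_sd = p.std(ddof=0)
print(homogeneity_sd)   # smaller value -> flatter spectrum -> higher SBS threshold
```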
Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust
Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin
2015-01-01
Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881
Felix, Janine F.; Bradfield, Jonathan P.; Monnereau, Claire; van der Valk, Ralf J.P.; Stergiakouli, Evie; Chesi, Alessandra; Gaillard, Romy; Feenstra, Bjarke; Thiering, Elisabeth; Kreiner-Møller, Eskil; Mahajan, Anubha; Pitkänen, Niina; Joro, Raimo; Cavadino, Alana; Huikari, Ville; Franks, Steve; Groen-Blokhuis, Maria M.; Cousminer, Diana L.; Marsh, Julie A.; Lehtimäki, Terho; Curtin, John A.; Vioque, Jesus; Ahluwalia, Tarunveer S.; Myhre, Ronny; Price, Thomas S.; Vilor-Tejedor, Natalia; Yengo, Loïc; Grarup, Niels; Ntalla, Ioanna; Ang, Wei; Atalay, Mustafa; Bisgaard, Hans; Blakemore, Alexandra I.; Bonnefond, Amelie; Carstensen, Lisbeth; Eriksson, Johan; Flexeder, Claudia; Franke, Lude; Geller, Frank; Geserick, Mandy; Hartikainen, Anna-Liisa; Haworth, Claire M.A.; Hirschhorn, Joel N.; Hofman, Albert; Holm, Jens-Christian; Horikoshi, Momoko; Hottenga, Jouke Jan; Huang, Jinyan; Kadarmideen, Haja N.; Kähönen, Mika; Kiess, Wieland; Lakka, Hanna-Maaria; Lakka, Timo A.; Lewin, Alexandra M.; Liang, Liming; Lyytikäinen, Leo-Pekka; Ma, Baoshan; Magnus, Per; McCormack, Shana E.; McMahon, George; Mentch, Frank D.; Middeldorp, Christel M.; Murray, Clare S.; Pahkala, Katja; Pers, Tune H.; Pfäffle, Roland; Postma, Dirkje S.; Power, Christine; Simpson, Angela; Sengpiel, Verena; Tiesler, Carla M. T.; Torrent, Maties; Uitterlinden, André G.; van Meurs, Joyce B.; Vinding, Rebecca; Waage, Johannes; Wardle, Jane; Zeggini, Eleftheria; Zemel, Babette S.; Dedoussis, George V.; Pedersen, Oluf; Froguel, Philippe; Sunyer, Jordi; Plomin, Robert; Jacobsson, Bo; Hansen, Torben; Gonzalez, Juan R.; Custovic, Adnan; Raitakari, Olli T.; Pennell, Craig E.; Widén, Elisabeth; Boomsma, Dorret I.; Koppelman, Gerard H.; Sebert, Sylvain; Järvelin, Marjo-Riitta; Hyppönen, Elina; McCarthy, Mark I.; Lindi, Virpi; Harri, Niinikoski; Körner, Antje; Bønnelykke, Klaus; Heinrich, Joachim; Melbye, Mads; Rivadeneira, Fernando; Hakonarson, Hakon; Ring, Susan M.; Smith, George Davey; Sørensen, Thorkild I.A.; Timpson, Nicholas J.; Grant, Struan F.A.; Jaddoe, Vincent W.V.
2016-01-01
A large number of genetic loci are associated with adult body mass index. However, the genetics of childhood body mass index are largely unknown. We performed a meta-analysis of genome-wide association studies of childhood body mass index, using sex- and age-adjusted standard deviation scores. We included 35 668 children from 20 studies in the discovery phase and 11 873 children from 13 studies in the replication phase. In total, 15 loci reached genome-wide significance (P-value < 5 × 10−8) in the joint discovery and replication analysis, of which 12 are previously identified loci in or close to ADCY3, GNPDA2, TMEM18, SEC16B, FAIM2, FTO, TFAP2B, TNNI3K, MC4R, GPR61, LMX1B and OLFM4 associated with adult body mass index or childhood obesity. We identified three novel loci: rs13253111 near ELP3, rs8092503 near RAB27B and rs13387838 near ADAM23. Per additional risk allele, body mass index increased 0.04 Standard Deviation Score (SDS) [Standard Error (SE) 0.007], 0.05 SDS (SE 0.008) and 0.14 SDS (SE 0.025), for rs13253111, rs8092503 and rs13387838, respectively. A genetic risk score combining all 15 SNPs showed that each additional average risk allele was associated with a 0.073 SDS (SE 0.011, P-value = 3.12 × 10−10) increase in childhood body mass index in a population of 1955 children. This risk score explained 2% of the variance in childhood body mass index. This study highlights the shared genetic background between childhood and adult body mass index and adds three novel loci. These loci likely represent age-related differences in strength of the associations with body mass index. PMID:26604143
Effect of single-dose Ginkgo biloba and Panax ginseng on driving performance.
LaSala, Gregory S; McKeever, Rita G; Patel, Urvi; Okaneku, Jolene; Vearrier, David; Greenberg, Michael I
2015-02-01
Panax ginseng and Ginkgo biloba are commonly used herbal supplements in the United States that have been reported to increase alertness and cognitive function. The objective of this study was to investigate the effects of these specific herbals on driving performance. 30 volunteers were tested using the STISIM3® Driving Simulator (Systems Technology Inc., Hawthorne, CA, USA) in this double-blind, placebo-controlled study. The subjects were randomized into 3 groups of 10 subjects per group. After 10 min of simulated driving, subjects received either ginseng (1200 mg), Ginkgo (240 mg), or placebo administered orally. The test herbals and placebo were randomized and administered by a research assistant outside of the study to maintain blinding. One hour following administration of the herbals or placebo, the subjects completed an additional 10 min of simulated driving. Standard driving parameters were studied including reaction time, standard deviation of lateral positioning, and divided attention. Data collected for the divided attention parameter included time to response and number of correct responses. The data were analyzed with repeated-measures analysis of variance (ANOVA) and the Kruskal-Wallis test using SPSS 22 (IBM, Armonk, NY, USA). There was no difference in reaction time or standard deviation of lateral positioning for both the ginseng and Ginkgo arms. For the divided attention parameter, the response time in the Ginkgo arm decreased from 2.9 to 2.5 s. The ginseng arm also decreased from 3.2 to 2.4 s. None of these values were statistically significant when between-group differences were analyzed. The data suggest there was no statistically significant difference between ginseng, Ginkgo or placebo on driving performance. We postulate this is due to the relatively small numbers in our study. Further study with a larger sample size may be needed in order to elucidate more fully the effects of Ginkgo and ginseng on driving ability.
Consideration of kaolinite interference correction for quartz measurements in coal mine dust.
Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin
2013-01-01
Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.
Felix, Janine F; Bradfield, Jonathan P; Monnereau, Claire; van der Valk, Ralf J P; Stergiakouli, Evie; Chesi, Alessandra; Gaillard, Romy; Feenstra, Bjarke; Thiering, Elisabeth; Kreiner-Møller, Eskil; Mahajan, Anubha; Pitkänen, Niina; Joro, Raimo; Cavadino, Alana; Huikari, Ville; Franks, Steve; Groen-Blokhuis, Maria M; Cousminer, Diana L; Marsh, Julie A; Lehtimäki, Terho; Curtin, John A; Vioque, Jesus; Ahluwalia, Tarunveer S; Myhre, Ronny; Price, Thomas S; Vilor-Tejedor, Natalia; Yengo, Loïc; Grarup, Niels; Ntalla, Ioanna; Ang, Wei; Atalay, Mustafa; Bisgaard, Hans; Blakemore, Alexandra I; Bonnefond, Amelie; Carstensen, Lisbeth; Eriksson, Johan; Flexeder, Claudia; Franke, Lude; Geller, Frank; Geserick, Mandy; Hartikainen, Anna-Liisa; Haworth, Claire M A; Hirschhorn, Joel N; Hofman, Albert; Holm, Jens-Christian; Horikoshi, Momoko; Hottenga, Jouke Jan; Huang, Jinyan; Kadarmideen, Haja N; Kähönen, Mika; Kiess, Wieland; Lakka, Hanna-Maaria; Lakka, Timo A; Lewin, Alexandra M; Liang, Liming; Lyytikäinen, Leo-Pekka; Ma, Baoshan; Magnus, Per; McCormack, Shana E; McMahon, George; Mentch, Frank D; Middeldorp, Christel M; Murray, Clare S; Pahkala, Katja; Pers, Tune H; Pfäffle, Roland; Postma, Dirkje S; Power, Christine; Simpson, Angela; Sengpiel, Verena; Tiesler, Carla M T; Torrent, Maties; Uitterlinden, André G; van Meurs, Joyce B; Vinding, Rebecca; Waage, Johannes; Wardle, Jane; Zeggini, Eleftheria; Zemel, Babette S; Dedoussis, George V; Pedersen, Oluf; Froguel, Philippe; Sunyer, Jordi; Plomin, Robert; Jacobsson, Bo; Hansen, Torben; Gonzalez, Juan R; Custovic, Adnan; Raitakari, Olli T; Pennell, Craig E; Widén, Elisabeth; Boomsma, Dorret I; Koppelman, Gerard H; Sebert, Sylvain; Järvelin, Marjo-Riitta; Hyppönen, Elina; McCarthy, Mark I; Lindi, Virpi; Harri, Niinikoski; Körner, Antje; Bønnelykke, Klaus; Heinrich, Joachim; Melbye, Mads; Rivadeneira, Fernando; Hakonarson, Hakon; Ring, Susan M; Smith, George Davey; Sørensen, Thorkild I A; Timpson, Nicholas J; Grant, Struan F A; Jaddoe, Vincent W V
2016-01-15
A large number of genetic loci are associated with adult body mass index. However, the genetics of childhood body mass index are largely unknown. We performed a meta-analysis of genome-wide association studies of childhood body mass index, using sex- and age-adjusted standard deviation scores. We included 35 668 children from 20 studies in the discovery phase and 11 873 children from 13 studies in the replication phase. In total, 15 loci reached genome-wide significance (P-value < 5 × 10(-8)) in the joint discovery and replication analysis, of which 12 are previously identified loci in or close to ADCY3, GNPDA2, TMEM18, SEC16B, FAIM2, FTO, TFAP2B, TNNI3K, MC4R, GPR61, LMX1B and OLFM4 associated with adult body mass index or childhood obesity. We identified three novel loci: rs13253111 near ELP3, rs8092503 near RAB27B and rs13387838 near ADAM23. Per additional risk allele, body mass index increased 0.04 Standard Deviation Score (SDS) [Standard Error (SE) 0.007], 0.05 SDS (SE 0.008) and 0.14 SDS (SE 0.025), for rs13253111, rs8092503 and rs13387838, respectively. A genetic risk score combining all 15 SNPs showed that each additional average risk allele was associated with a 0.073 SDS (SE 0.011, P-value = 3.12 × 10(-10)) increase in childhood body mass index in a population of 1955 children. This risk score explained 2% of the variance in childhood body mass index. This study highlights the shared genetic background between childhood and adult body mass index and adds three novel loci. These loci likely represent age-related differences in strength of the associations with body mass index. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Third molar development by measurements of open apices in an Italian sample of living subjects.
De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto
2016-02-01
The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between the I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 healthy Italian subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The relationship between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, the age and the gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R(2) (AdjR(2)) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR(2). The polynomial (2nd order) fitting explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the younger group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Modelling of aortic aneurysm and aortic dissection through 3D printing.
Ho, Daniel; Squelch, Andrew; Sun, Zhonghua
2017-03-01
The aim of this study was to assess if the complex anatomy of aortic aneurysm and aortic dissection can be accurately reproduced from a contrast-enhanced computed tomography (CT) scan into a three-dimensional (3D) printed model. Contrast-enhanced cardiac CT scans from two patients were post-processed and produced as 3D printed thoracic aorta models of aortic aneurysm and aortic dissection. The transverse diameter was measured at five anatomical landmarks for both models, compared across three stages: the original contrast-enhanced CT images, the stereolithography (STL) format computerised model prepared for 3D printing and the contrast-enhanced CT of the 3D printed model. For the model with aortic dissection, measurements of the true and false lumen were taken and compared at two points on the descending aorta. Three-dimensional printed models were generated with strong and flexible plastic material with successful replication of anatomical details of aortic structures and pathologies. The mean difference in transverse vessel diameter between the contrast-enhanced CT images before and after 3D printing was 1.0 and 1.2 mm, for the first and second models respectively (standard deviation: 1.0 mm and 0.9 mm). Additionally, for the second model, the mean luminal diameter difference between the 3D printed model and CT images was 0.5 mm. Encouraging results were achieved with regards to reproducing 3D models depicting aortic aneurysm and aortic dissection. Variances in vessel diameter measurement outside a standard deviation of 1 mm tolerance indicate further work is required into the assessment and accuracy of 3D model reproduction. © 2017 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
Depression and Oxidative Stress: Results From a Meta-Analysis of Observational Studies
Palta, Priya; Samuel, Laura J.; Miller, Edgar R.; Szanton, Sarah L.
2014-01-01
Objective To perform a systematic review and meta-analysis that quantitatively tests and summarizes the hypothesis that depression results in elevated oxidative stress and lower antioxidant levels. Methods We performed a meta-analysis of studies that reported an association between depression and oxidative stress and/or antioxidant status markers. PubMed and EMBASE databases were searched for articles published from January 1980 through December 2012. A random-effects model, weighted by inverse variance, was performed to pool standard deviation (Cohen’s d) effect size estimates across studies for oxidative stress and antioxidant status measures, separately. Results Twenty-three studies with 4980 participants were included in the meta-analysis. Depression was most commonly measured using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria. A Cohen’s d effect size of 0.55 (95% confidence interval = 0.47–0.63) was found for the association between depression and oxidative stress, indicating a roughly 0.55 of 1-standard-deviation increase in oxidative stress among individuals with depression compared with those without depression. The results of the studies displayed significant heterogeneity (I2 = 80.0%, p < .001). A statistically significant effect was also observed for the association between depression and antioxidant status markers (Cohen’s d = −0.24, 95% confidence interval = −0.33 to −0.15). Conclusions This meta-analysis observed an association between depression and oxidative stress and antioxidant status across many different studies. Differences in measures of depression and markers of oxidative stress and antioxidant status markers could account for the observed heterogeneity. These findings suggest that well-established associations between depression and poor health outcomes may be mediated by high oxidative stress. PMID:24336428
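To make the pooling step concrete, the sketch below runs a generic inverse-variance random-effects (DerSimonian-Laird) calculation on made-up Cohen's d values; it illustrates the general method named in the abstract, not the authors' data or code.

```python
import numpy as np

# Hypothetical Cohen's d values and their variances from individual studies
d = np.array([0.4, 0.7, 0.55, 0.3, 0.8])
var_d = np.array([0.02, 0.05, 0.03, 0.04, 0.06])

# Fixed-effect (inverse-variance) pooled estimate
w = 1.0 / var_d
d_fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2)
Q = np.sum(w * (d - d_fixed) ** 2)          # Cochran's Q heterogeneity statistic
df = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights incorporate the between-study variance
w_re = 1.0 / (var_d + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (d_re - 1.96 * se_re, d_re + 1.96 * se_re)

# I^2 expresses the share of total variability attributable to heterogeneity
i2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled d = {d_re:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```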
NASA Astrophysics Data System (ADS)
Liu, Jiping; Zhang, Zhanhai; Hu, Yongyun; Chen, Liqi; Dai, Yongjiu; Ren, Xiaobo
2008-05-01
The surface air temperature (SAT) over the Arctic Ocean in reanalyses and global climate model simulations was assessed using the International Arctic Buoy Programme/Polar Exchange at the Sea Surface (IABP/POLES) observations for the period 1979-1999. The reanalyses, including the National Centers for Environmental Prediction Reanalysis II (NCEP2) and European Centre for Medium-Range Weather Forecasts 40-year Reanalysis (ERA40), show encouraging agreement with the IABP/POLES observations, although some spatiotemporal discrepancies are noteworthy. The reanalyses have warm annual mean biases and underestimate the observed interannual SAT variability in summer. Additionally, NCEP2 shows an excessive warming trend. Most model simulations (coordinated by the Intergovernmental Panel on Climate Change for its Fourth Assessment Report) reproduce the annual mean, seasonal cycle, and trend of the observed SAT reasonably well, particularly the multi-model ensemble mean. However, large discrepancies are found. Some models have annual mean SAT biases far exceeding the standard deviation of the observed interannual SAT variability and the across-model standard deviation. Spatially, the largest inter-model variance of the annual mean SAT is found over the North Pole, Greenland Sea, Barents Sea and Baffin Bay. Seasonally, a large spread of the simulated SAT among the models is found in winter. The models show interannual variability and decadal trends of various amplitudes, and cannot capture the observed dominant SAT mode of variability and cooling trend in winter. Further discussion of the possible causes of the identified SAT errors in some models suggests that a model's performance in simulating sea ice is an important factor.
Holloway, Cpt Monica M; Jurina, Cpt Shannan L; Orszag, Cpt Joshua D; Bragdon, Lt George R; Green, Lt Rustin D; Garcia-Blanco, Jose C; Benham, Brian E; Adams, Ltc Timothy S; Johnson, Don
2016-01-01
To compare the effects of amiodarone administration by the humerus intraosseous (HIO) and intravenous (IV) routes on return of spontaneous circulation (ROSC), time to maximum concentration (Tmax), maximum plasma drug concentration (Cmax), time to ROSC, and mean concentrations over time in a hypovolemic cardiac arrest model. Prospective, between-subjects, randomized experimental design. TriService Research Facility. Yorkshire-cross swine (n = 28). Swine were anesthetized and placed into cardiac arrest. After 2 minutes, cardiopulmonary resuscitation was initiated. After an additional 2 minutes, amiodarone 300 mg was administered via the HIO or the IV route. Blood samples were collected over 5 minutes. The samples were analyzed using high-performance liquid chromatography tandem mass spectrometry. ROSC, Tmax, Cmax, time to ROSC, and mean concentrations over time. There was no difference in ROSC between the HIO and IV groups; each group had five subjects that achieved ROSC and two that did not (p = 1). There was no difference in Tmax (p = 0.501) or in Cmax between the HIO and IV groups (p = 0.232). Means ± standard deviations in seconds were 94.3 ± 78.3 for the IV group compared with 115.7 ± 87.3 for the HIO group. The mean ± standard deviation concentration in nanograms per milliliter was 49,041 ± 21,107 for the HIO group and 74,258 ± 33,176 for the IV group. There were no significant differences between the HIO and IV groups relative to time to ROSC (p = 0.220). A repeated-measures analysis of variance indicated that there were no significant differences between the groups relative to concentrations over time (p > 0.05). The humerus intraosseous route provides rapid and reliable access to administer life-saving medications during cardiac arrest.
Park, Eliza M; Deal, Allison M; Yopp, Justin M; Edwards, Teresa; Resnick, Samuel J; Song, Mi-Kyung; Nakamura, Zev M; Rosenstein, Donald L
2018-05-06
Cancer is a leading cause of death among women of parenting age in the United States. Women living with advanced or incurable cancer who have dependent children experience high rates of depression and anxiety as well as unique parenting challenges. To the authors' knowledge, few studies to date have examined the parenting factors associated with health-related quality of life (HRQOL) in women with advanced cancer. The authors conducted a cross-sectional, Web-based survey of the psychosocial concerns of 224 women with an American Joint Committee on Cancer (AJCC) tumor-node-metastasis stage IV solid tumor malignancy who had at least 1 child aged <18 years. Participants completed validated measures of HRQOL (Functional Assessment of Cancer Therapy-General [FACT-G]); depression and anxiety symptom severity; functional status; parenting concerns; and investigator-designed questions to assess demographic, communication, and parenting characteristics. Multiple linear regression models were estimated to identify factors associated with FACT-G total and subscale scores. The mean FACT-G score was 66 (standard deviation, 16). The mean Emotional Well-Being subscale scores were particularly low (13; standard deviation, 5). In multivariable linear regression models, parenting variables explained nearly 40% of the HRQOL model variance. In the fully adjusted model, parenting concerns and the absence of parental prognostic communication with children both were found to be significantly associated with HRQOL scores. For each 1-point increase in parenting concern severity, FACT-G scores decreased by 4 points (P = .003). Women with metastatic cancer who are parents of dependent children are at risk of high psychological distress and low HRQOL. Parenting factors may have a negative influence on HRQOL in this patient population. Cancer 2018. © 2018 American Cancer Society.
[Challenges in building a surgical obesity center].
Fischer, L; El Zein, Z; Bruckner, T; Hünnemeyer, K; Rudofsky, G; Reichenberger, M; Schommer, K; Gutt, C N; Büchler, M W; Müller-Stich, B P
2014-04-01
It is estimated that approximately 1 million adults in Germany suffer from grade III obesity. The aim of this article is to describe the challenges faced when establishing a surgical obesity center. The inflow of patients as well as the personnel and infrastructure of the interdisciplinary Diabetes and Obesity Center in Heidelberg were analyzed. The distribution of continuous data was described by mean values and standard deviation and analyzed using analysis of variance. The interdisciplinary Diabetes and Obesity Center in Heidelberg was founded in 2006 and offers conservative therapeutic treatment and all currently available operative procedures. For every operative intervention carried out, an average of 1.7 expert reports and 0.3 counter-expert reports were necessary. The time period from the initial presentation of patients in the department of surgery to an operation was on average 12.8 months (standard deviation SD ± 4.5 months). The 47 patients for whom reimbursement for treatment was initially refused had an average body mass index (BMI) of 49.2 kg/m(2), and 39 of these had at least one comorbidity requiring treatment. Of the 45 patients for whom the reason for the refusal of treatment costs was given as a lack of conservative treatment, 30 had undertaken a medically supervised attempt at losing weight over at least 6 months. Additionally, 19 of these patients could document participation in a course at a rehabilitation center, a Xenical® or Reductil® therapy, or completion of the Optifast® program. For the 20 patients who supposedly lacked a psychosomatic evaluation, an adequate psychosomatic evaluation had in fact been carried out in all cases. The establishment of a surgical obesity center can take several years. An essential prerequisite for success seems to be constructive and targeted cooperation with the health insurance companies.
Duration of breast feeding and language ability in middle childhood.
Whitehouse, Andrew J O; Robinson, Monique; Li, Jianghong; Oddy, Wendy H
2011-01-01
There is controversy over whether increased breast-feeding duration has long-term benefits for language development. The current study examined whether the positive association of breast feeding with language ability at age 5 years in the Western Australian Pregnancy (Raine) Cohort was still present at age 10 years. The Raine Study is a longitudinal study of 2868 liveborn children recruited at approximately 18 weeks gestation. Breast-feeding data were based upon information prospectively collected during infancy, and were summarised according to four categories of breast-feeding duration: (1) never breast-fed, (2) breast-fed predominantly for <4 months, (3) breast-fed predominantly for 4-6 months, and (4) breast-fed predominantly for >6 months. Language ability was assessed in 1195 children at the 10-year follow-up (mean age = 10.58 years; standard deviation = 0.19) using the Peabody Picture Vocabulary Test - Revised (PPVT-R), which is standardised to a mean of 100 and a standard deviation of 15. Associations between breast-feeding duration and PPVT-R scores were assessed before and after adjustment for a range of sociodemographic, obstetric and psychosocial covariates. Analysis of variance revealed a strong positive association between the duration of predominant breast feeding and PPVT-R scores at age 10 years. A multivariable linear regression analysis adjusted for covariates found that children who were predominantly breast-fed for >6 months had a mean PPVT-R score that was 4.04 points higher than children who were never breast-fed. This compares with an increase of 3.56 points at age 5 years. Breast feeding for longer periods in early life has a positive and statistically independent effect on language development in middle childhood. © 2010 Blackwell Publishing Ltd.
Nutrient profiles of vegetarian and nonvegetarian dietary patterns.
Rizzo, Nico S; Jaceldo-Siegl, Karen; Sabate, Joan; Fraser, Gary E
2013-12-01
Differences in nutrient profiles between vegetarian and nonvegetarian dietary patterns reflect nutritional differences that can contribute to the development of disease. Our aim was to compare nutrient intakes between dietary patterns characterized by consumption or exclusion of meat and dairy products. We conducted a cross-sectional study of 71,751 subjects (mean age=59 years) from the Adventist Health Study 2. Data were collected between 2002 and 2007. Participants completed a 204-item validated semi-quantitative food frequency questionnaire. Dietary patterns compared were nonvegetarian, semi-vegetarian, pesco vegetarian, lacto-ovo vegetarian, and strict vegetarian. Analysis of covariance was used to analyze differences in nutrient intakes by dietary patterns and was adjusted for age, sex, and race. Body mass index and other relevant demographic data were reported and compared by dietary pattern using χ(2) tests and analysis of variance. Many nutrient intakes varied significantly between dietary patterns. Nonvegetarians had the lowest intakes of plant proteins, fiber, beta carotene, and magnesium compared with those following vegetarian dietary patterns, and the highest intakes of saturated, trans, arachidonic, and docosahexaenoic fatty acids. The lower tails of some nutrient distributions in strict vegetarians suggested inadequate intakes by a portion of the subjects. Energy intake was similar among dietary patterns at close to 2,000 kcal/day, with the exception of semi-vegetarians, who had an intake of 1,707 kcal/day. Mean body mass index was highest in nonvegetarians (mean=28.7 [standard deviation=6.4]) and lowest in strict vegetarians (mean=24.0 [standard deviation=4.8]). Nutrient profiles varied markedly among dietary patterns that were defined by meat and dairy intakes. These differences are of interest in the etiology of obesity and chronic diseases. Copyright © 2013 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Cespedes, Elizabeth M; Hu, Frank B; Redline, Susan; Rosner, Bernard; Alcantara, Carmela; Cai, Jianwen; Hall, Martica H; Loredo, Jose S; Mossavar-Rahmani, Yasmin; Ramos, Alberto R; Reid, Kathryn J; Shah, Neomi A; Sotres-Alvarez, Daniela; Zee, Phyllis C; Wang, Rui; Patel, Sanjay R
2016-03-15
Most studies of sleep and health outcomes rely on self-reported sleep duration, although correlation with objective measures is poor. In this study, we defined sociodemographic and sleep characteristics associated with misreporting and assessed whether accounting for these factors better explains variation in objective sleep duration among 2,086 participants in the Hispanic Community Health Study/Study of Latinos who completed more than 5 nights of wrist actigraphy and reported habitual bed/wake times from 2010 to 2013. Using linear regression, we examined self-report as a predictor of actigraphy-assessed sleep duration. Mean amount of time spent asleep was 7.85 (standard deviation, 1.12) hours by self-report and 6.74 (standard deviation, 1.02) hours by actigraphy; correlation between them was 0.43. For each additional hour of self-reported sleep, actigraphy time spent asleep increased by 20 minutes (95% confidence interval: 19, 22). Correlations between self-reported and actigraphy-assessed time spent asleep were lower with male sex, younger age, sleep efficiency <85%, and night-to-night variability in sleep duration ≥1.5 hours. Adding sociodemographic and sleep factors to self-reports increased the proportion of variance explained in actigraphy-assessed sleep slightly (18%-32%). In this large validation study including Hispanics/Latinos, we demonstrated a moderate correlation between self-reported and actigraphy-assessed time spent asleep. The performance of self-reports varied by demographic and sleep measures but not by Hispanic subgroup. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
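The reported slope (about 20 minutes of actigraphy-measured sleep per additional self-reported hour) is simply the coefficient of a least-squares fit. The sketch below illustrates such a fit on synthetic data whose summary statistics loosely mimic those in the abstract; all numbers are illustrative, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data loosely mimicking the reported summaries:
# self-report mean ~7.85 h (SD ~1.12), actigraphy mean ~6.74 h, correlation ~0.43
n = 2086
self_report = rng.normal(7.85, 1.12, n)
actigraphy = 6.74 + 0.33 * (self_report - 7.85) + rng.normal(0, 0.78, n)

# Slope: hours of actigraphy-measured sleep gained per extra self-reported hour
slope, intercept = np.polyfit(self_report, actigraphy, 1)
r = np.corrcoef(self_report, actigraphy)[0, 1]
print(f"slope = {slope:.2f} h/h (~{slope * 60:.0f} min), r = {r:.2f}")
```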
De Luca, Stefano; Mangiulli, Tatiana; Merelli, Vera; Conforti, Federica; Velandia Palacio, Luz Andrea; Agostini, Susanna; Spinas, Enrico; Cameriere, Roberto
2016-04-01
The aim of this study is to develop a specific formula for assessing skeletal age in a sample of growing Italian infants and children by measuring the carpal bones and the epiphyses of the radius and ulna. A sample of 332 X-rays of left hand-wrist bones (130 boys and 202 girls), aged between 1 and 16 years, was analyzed retrospectively. Analysis of covariance (ANCOVA) was applied to study how sex affects the growth of the ratio Bo/Ca in the boys and girls groups. The regression model, describing age as a linear function of sex and the Bo/Ca ratio for the new Italian sample, yielded the following formula: Age = -1.7702 + 1.0088 g + 14.8166 (Bo/Ca). This model explained 83.5% of the total variance (R(2) = 0.835). The median of the residuals (observed age minus predicted age) was -0.38 years, with a quartile deviation of 2.01 and a standard error of estimate of 1.54. A second test sample of 204 Italian children (108 girls and 96 boys), aged between 1 and 16 years, was used to evaluate the accuracy of the specific regression model. A paired-sample t-test was used to analyze the mean differences between skeletal and chronological age. The mean error for girls is 0.00, and age is slightly underestimated in boys with a mean error of -0.30 years. The standard deviations are 0.70 years for girls and 0.78 years for boys. The obtained results indicate a strong relationship between estimated and chronological ages. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
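A minimal sketch of applying the published formula is given below; the Bo/Ca value is hypothetical, and the coding of the sex term g (1 for boys, 0 for girls) is an assumption, since the abstract does not state it explicitly.

```python
def estimate_skeletal_age(bo_ca_ratio: float, is_boy: bool) -> float:
    """Age = -1.7702 + 1.0088*g + 14.8166*(Bo/Ca).

    The coding g = 1 for boys and g = 0 for girls is assumed here;
    the abstract does not specify it.
    """
    g = 1.0 if is_boy else 0.0
    return -1.7702 + 1.0088 * g + 14.8166 * bo_ca_ratio

# Hypothetical measurement: Bo/Ca ratio of 0.60 for a girl
print(f"estimated age: {estimate_skeletal_age(0.60, is_boy=False):.1f} years")
```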
Prognostic implications of mutation-specific QTc standard deviation in congenital long QT syndrome.
Mathias, Andrew; Moss, Arthur J; Lopes, Coeli M; Barsheshet, Alon; McNitt, Scott; Zareba, Wojciech; Robinson, Jennifer L; Locati, Emanuela H; Ackerman, Michael J; Benhorin, Jesaia; Kaufman, Elizabeth S; Platonov, Pyotr G; Qi, Ming; Shimizu, Wataru; Towbin, Jeffrey A; Michael Vincent, G; Wilde, Arthur A M; Zhang, Li; Goldenberg, Ilan
2013-05-01
Individual corrected QT interval (QTc) may vary widely among carriers of the same long QT syndrome (LQTS) mutation. Currently, neither the mechanism nor the implications of this variable penetrance are well understood. We hypothesized that assessment of QTc variance in patients with congenital LQTS who carry the same mutation provides prognostic information incremental to the patient-specific QTc. The study population comprised 1206 patients with LQTS carrying 95 different mutations, each mutation represented by ≥5 carriers. Multivariate Cox proportional hazards regression analysis was used to assess the effect of mutation-specific standard deviation of QTc (QTcSD) on the risk of cardiac events (comprising syncope, aborted cardiac arrest, and sudden cardiac death) from birth through age 40 years in the total population and by genotype. Assessment of mutation-specific QTcSD showed large differences among carriers of the same mutations (median QTcSD 45 ms). Multivariate analysis showed that each 20 ms increment in QTcSD was associated with a significant 33% (P = .002) increase in the risk of cardiac events after adjustment for the patient-specific QTc duration and the family effect on QTc. The risk associated with QTcSD was pronounced among patients with long QT syndrome type 1 (hazard ratio 1.55 per 20 ms increment; P < .001), whereas among patients with long QT syndrome type 2, the risk associated with QTcSD was not statistically significant (hazard ratio 0.99; P = .95; P value for QTcSD-by-genotype interaction = .002). Our findings suggest that mutations with a wider variation in QTc duration are associated with increased risk of cardiac events. These findings appear to be genotype-specific, with a pronounced effect among patients with the long QT syndrome type 1 genotype. Copyright © 2013. Published by Elsevier Inc.
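As a rough illustration of the QTcSD measure, the sketch below groups hypothetical carrier-level QTc values by mutation and computes the within-mutation standard deviation; the mutation labels and values are invented, and the study's Cox regression is not reproduced.

```python
import statistics
from collections import defaultdict

# Hypothetical (mutation label, QTc in ms) pairs for individual carriers
carriers = [
    ("LQT1_mutation_A", 470), ("LQT1_mutation_A", 505), ("LQT1_mutation_A", 450),
    ("LQT1_mutation_A", 520), ("LQT1_mutation_A", 480),
    ("LQT2_mutation_B", 460), ("LQT2_mutation_B", 465), ("LQT2_mutation_B", 455),
    ("LQT2_mutation_B", 470), ("LQT2_mutation_B", 462),
]

by_mutation = defaultdict(list)
for mutation, qtc in carriers:
    by_mutation[mutation].append(qtc)

# QTcSD: spread of QTc among carriers of the same mutation
# (the study required at least 5 carriers per mutation)
for mutation, values in by_mutation.items():
    if len(values) >= 5:
        print(f"{mutation}: QTcSD = {statistics.stdev(values):.1f} ms")
```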
Timing of Emergency Medicine Student Evaluation Does Not Affect Scoring.
Hiller, Katherine M; Waterbrook, Anna; Waters, Kristina
2016-02-01
Evaluation of medical students rotating through the emergency department (ED) is an important formative and summative assessment method. Intuitively, delaying evaluation should affect the reliability of this assessment method; however, the effect of evaluation timing on scoring is unknown. A quality-improvement project evaluating the timing of end-of-shift ED evaluations at the University of Arizona was performed to determine whether a delay in evaluation affected the score. End-of-shift ED evaluations completed on behalf of fourth-year medical students from July 2012 to March 2013 were reviewed. Forty-seven students were evaluated 547 times by 46 residents and attendings. Evaluation scores were means of anchored Likert scales (1-5) for the domains of energy/interest, fund of knowledge, judgment/problem-solving ability, clinical skills, personal effectiveness, and systems-based practice. Date of shift, date of evaluation, and score were collected. Linear regression was performed to determine whether timing of the evaluation had an effect on evaluation score. Data were complete for 477 of 547 evaluations (87.2%). Mean evaluation score was 4.1 (range 2.3-5, standard deviation 0.62). Evaluations took a mean of 8.5 days (median 4 days, range 0-59 days, standard deviation 9.77 days) to complete. Delay in evaluation had no significant effect on score (p = 0.983). The evaluation score was not affected by timing of the evaluation. Variance in scores was similar for both immediate and delayed evaluations. Considerable amounts of time and energy are expended tracking down delayed evaluations. This activity does not impact a student's final grade. Copyright © 2016 Elsevier Inc. All rights reserved.
Zavgorodni, S
2004-12-07
Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
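The reported linear dependence of biological effect on dose variance follows directly from the linear-quadratic model, since the expectation of d² is μ² + σ². The sketch below checks that expectation against a Monte Carlo simulation using illustrative parameters; it is a simplified stand-in for the paper's BECD/ECD derivation, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (not from the paper) linear-quadratic parameters and schedule
alpha, beta = 0.3, 0.03              # Gy^-1, Gy^-2  ->  alpha/beta = 10 Gy
n_frac, mean_d, sd_d = 30, 2.0, 0.4  # fractions, mean and SD of dose per fraction (Gy)

# Analytical expectation of log cell kill with Gaussian dose fluctuations:
# E[alpha*d + beta*d^2] = alpha*mu + beta*(mu^2 + sigma^2)
ab = alpha / beta
bed_const = n_frac * mean_d * (1 + mean_d / ab)   # constant-dose BED
bed_fluct = bed_const + n_frac * sd_d**2 / ab     # grows linearly with the dose variance

# Monte Carlo check: average log cell kill over many simulated treatment courses
doses = rng.normal(mean_d, sd_d, size=(100_000, n_frac))
log_kill = np.mean(np.sum(alpha * doses + beta * doses**2, axis=1))
bed_mc = log_kill / alpha

print(f"BED, constant dose     : {bed_const:.2f} Gy")
print(f"BED, fluctuating dose  : {bed_fluct:.2f} Gy (analytical)")
print(f"BED, fluctuating dose  : {bed_mc:.2f} Gy (Monte Carlo)")
```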
DOE Office of Scientific and Technical Information (OSTI.GOV)
Correia, C.; De Medeiros, J. R.; Burkhart, B.
2014-04-10
We study how the estimation of the sonic Mach number (M_s) from 13CO linewidths relates to the actual three-dimensional sonic Mach number. For this purpose we analyze MHD simulations that include post-processing to take radiative transfer effects into account. As expected, we find very good agreement between the linewidth-estimated sonic Mach number and the actual sonic Mach number of the simulations for optically thin tracers. However, we find that opacity broadening causes M_s to be overestimated by a factor of ≈1.16-1.3 when calculated from optically thick 13CO lines. We also find that there is a dependence on the magnetic field: super-Alfvénic turbulence shows increased line broadening compared with sub-Alfvénic turbulence for all values of optical depth for supersonic turbulence. Our results have implications for the observationally derived relationship between the sonic Mach number and the density standard deviation (σ_ρ/⟨ρ⟩), σ²_ρ/⟨ρ⟩ = b²M_s², and the related relationship between the column density standard deviation (σ_N/⟨N⟩) and the sonic Mach number. In particular, we find that the parameter b, as an indicator of solenoidal versus compressive driving, will be underestimated as a result of opacity broadening. We compare the σ_N/⟨N⟩-M_s relation derived from synthetic dust extinction maps and 13CO linewidths with recent observational studies and find that solenoidally driven MHD turbulence simulations have values of σ_N/⟨N⟩ which are lower than in real molecular clouds. This may be due to the influence of self-gravity, which should be included in simulations of molecular cloud dynamics.
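A back-of-envelope sketch of how opacity broadening propagates into the driving parameter b through the relation σ²_ρ/⟨ρ⟩ = b²M_s² is shown below; the dispersion, Mach number and correction factors are hypothetical or taken loosely from the quoted 1.16-1.3 range.

```python
# Hypothetical observed quantities (not from the paper)
sigma_rho = 1.8        # density dispersion sigma_{rho/<rho>}, e.g. from an extinction map
ms_linewidth = 7.0     # sonic Mach number estimated from optically thick 13CO linewidths

# Density-variance relation: sigma^2 = b^2 * Ms^2  =>  b = sigma / Ms
b_raw = sigma_rho / ms_linewidth

# Opacity broadening inflates the linewidth-based Ms by ~1.16-1.3, so the inferred b
# is biased low; dividing Ms by the broadening factor gives a corrected estimate.
for opacity_factor in (1.16, 1.3):
    b_corrected = sigma_rho / (ms_linewidth / opacity_factor)
    print(f"opacity factor {opacity_factor}: "
          f"b = {b_corrected:.2f} (uncorrected {b_raw:.2f})")
```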
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres. American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
Genome-wide association study for ketosis in US Jerseys using producer-recorded data.
Parker Gaddis, K L; Megonigal, J H; Clay, J S; Wolfe, C W
2018-01-01
Ketosis is one of the most frequently reported metabolic health events in dairy herds. Several genetic analyses of ketosis in dairy cattle have been conducted; however, few have focused specifically on Jersey cattle. The objectives of this research included estimating variance components for susceptibility to ketosis and identification of genomic regions associated with ketosis in Jersey cattle. Voluntary producer-recorded health event data related to ketosis were available from Dairy Records Management Systems (Raleigh, NC). Standardization was implemented to account for the various acronyms used by producers to designate an incidence of ketosis. Events were restricted to the first reported incidence within 60 d after calving in first through fifth parities. After editing, there were a total of 42,233 records from 23,865 cows. A total of 1,750 genotyped animals were used for genomic analyses using 60,671 markers. Because of the binary nature of the trait, a threshold animal model was fitted using THRGIBBS1F90 (version 2.110) using only pedigree information, and genomic information was incorporated using a single-step genomic BLUP approach. Individual single nucleotide polymorphism (SNP) effects and the proportion of variance explained by 10-SNP windows were calculated using postGSf90 (version 1.38). Heritability of susceptibility to ketosis was 0.083 [standard deviation (SD) = 0.021] and 0.078 (SD = 0.018) in pedigree-based and genomic analyses, respectively. The marker with the largest associated effect was located on chromosome 10 at 66.3 Mbp. The 10-SNP window explaining the largest proportion of variance (0.70%) was located on chromosome 6 beginning at 56.1 Mbp. Gene Ontology (GO) and Medical Subject Heading (MeSH) enrichment analyses identified several overrepresented processes and terms related to immune function. Our results indicate that there is a genetic component related to ketosis susceptibility in Jersey cattle and, as such, genetic selection for improved resistance to ketosis is feasible. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Visentin, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M
2016-10-01
Milk minerals and coagulation properties are important for both consumers and processors, and they can aid in increasing milk added value. However, large-scale monitoring of these traits is hampered by expensive and time-consuming reference analyses. The objective of the present study was to develop prediction models for major mineral contents (Ca, K, Mg, Na, and P) and milk coagulation properties (MCP: rennet coagulation time, curd-firming time, and curd firmness) using mid-infrared spectroscopy. Individual milk samples (n=923) of Holstein-Friesian, Brown Swiss, Alpine Grey, and Simmental cows were collected from single-breed herds between January and December 2014. Reference analysis for the determination of both mineral contents and MCP was undertaken with standardized methods. For each milk sample, the mid-infrared spectrum in the range from 900 to 5,000cm(-1) was stored. Prediction models were calibrated using partial least squares regression coupled with a wavenumber selection technique called uninformative variable elimination, to improve model accuracy, and validated both internally and externally. The average reduction of wavenumbers used in partial least squares regression was 80%, which was accompanied by an average increment of 20% of the explained variance in external validation. The proportion of explained variance in external validation was about 70% for P, K, Ca, and Mg, and it was lower (40%) for Na. Milk coagulation properties prediction models explained between 54% (rennet coagulation time) and 56% (curd-firming time) of the total variance in external validation. The ratio of standard deviation of each trait to the respective root mean square error of prediction, which is an indicator of the predictive ability of an equation, suggested that the developed models might be effective for screening and collection of milk minerals and coagulation properties at the population level. Although prediction equations were not accurate enough to be proposed for analytic purposes, mid-infrared spectroscopy predictions could be evaluated as phenotypic information to genetically improve milk minerals and MCP on a large scale. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
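A generic sketch of the prediction and validation workflow is shown below: a partial least squares fit on synthetic "spectra" followed by computation of RMSEP and the SD-to-RMSEP ratio (RPD) used as the predictive-ability indicator. The uninformative variable elimination step and the real mid-infrared data are not reproduced; everything here is illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic "spectra": 500 wavenumbers, trait depends on a few informative bands
n_samples, n_wavenumbers = 400, 500
X = rng.normal(size=(n_samples, n_wavenumbers))
y = 2.0 * X[:, 50] - 1.5 * X[:, 200] + 0.8 * X[:, 350] + rng.normal(0, 0.5, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Generic PLS fit (the paper additionally applied uninformative variable elimination)
pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

# Predictive ability: RMSEP and the ratio of trait SD to RMSEP (RPD)
rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
rpd = np.std(y_test, ddof=1) / rmsep
print(f"RMSEP = {rmsep:.2f}, RPD = {rpd:.2f}")
```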
Sleep Duration and Area-Level Deprivation in Twins.
Watson, Nathaniel F; Horn, Erin; Duncan, Glen E; Buchwald, Dedra; Vitiello, Michael V; Turkheimer, Eric
2016-01-01
We used quantitative genetic models to assess whether area-level deprivation as indicated by the Singh Index predicts shorter sleep duration and modifies its underlying genetic and environmental contributions. Participants were 4,218 adult twin pairs (2,377 monozygotic and 1,841 dizygotic) from the University of Washington Twin Registry. Participants self-reported habitual sleep duration. The Singh Index was determined by linking geocoding addresses to 17 indicators at the census-tract level using data from Census of Washington State and Census Tract Cartographic Boundary Files from 2000 and 2010. Data were analyzed using univariate and bivariate genetic decomposition and quantitative genetic interaction models that assessed A (additive genetics), C (common environment), and E (unique environment) main effects of the Singh Index on sleep duration and allowed the magnitude of residual ACE variance components in sleep duration to vary with the Index. The sample had a mean age of 38.2 y (standard deviation [SD] = 18), and was predominantly female (62%) and Caucasian (91%). Mean sleep duration was 7.38 h (SD = 1.20) and the mean Singh Index score was 0.00 (SD = 0.89). The heritability of sleep duration was 39% and the Singh Index was 12%. The uncontrolled phenotypic regression of sleep duration on the Singh Index showed a significant negative relationship between area-level deprivation and sleep length (b = -0.080, P < 0.001). Every 1 SD in Singh Index was associated with a ∼4.5 min change in sleep duration. For the quasi-causal bivariate model, there was a significant main effect of E (b(0E) = -0.063; standard error [SE] = 0.30; P < 0.05). Residual variance components unique to sleep duration were significant for both A (b(0Au) = 0.734; SE = 0.020; P < 0.001) and E (b(0Eu) = 0.934; SE = 0.013; P < 0.001). Area-level deprivation has a quasi-causal association with sleep duration, with greater deprivation being related to shorter sleep. As area-level deprivation increases, unique genetic and nonshared environmental residual variance in sleep duration increases. © 2016 Associated Professional Sleep Societies, LLC.
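For orientation, a heavily simplified illustration of twin-based variance decomposition is sketched below using Falconer's formulas on hypothetical MZ/DZ correlations; the study itself fitted full maximum-likelihood ACE and gene-environment interaction models, which this sketch does not reproduce.

```python
# Falconer's formulas: a back-of-envelope ACE decomposition from twin correlations.
# The correlations below are made up for illustration only.
r_mz = 0.42   # hypothetical sleep-duration correlation in monozygotic pairs
r_dz = 0.23   # hypothetical sleep-duration correlation in dizygotic pairs

a2 = 2 * (r_mz - r_dz)   # additive genetic share of variance (heritability)
c2 = 2 * r_dz - r_mz     # common (shared) environment share
e2 = 1 - r_mz            # unique environment share (includes measurement error)

print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```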
Farrell, Mary Beth
2018-06-01
This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability ( P ) value. Calculation of P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
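The two central calculations in the passage, chance-corrected agreement (Cohen's κ) and the dispersion and precision of a sample mean (SD, SE, normal-approximation 95% confidence interval), can be illustrated with the short sketch below; the reader counts and measurements are hypothetical.

```python
import numpy as np

# Two readers classify the same 100 scans as positive/negative (hypothetical counts)
#                      reader B: pos  neg
agreement = np.array([[40, 10],    # reader A: pos
                      [ 5, 45]])   # reader A: neg

n = agreement.sum()
p_observed = np.trace(agreement) / n            # raw percent agreement
row = agreement.sum(axis=1) / n
col = agreement.sum(axis=0) / n
p_chance = np.sum(row * col)                    # agreement expected by chance alone
kappa = (p_observed - p_chance) / (1 - p_chance)

# Dispersion of a sample: SD, SE, and a normal-approximation 95% CI of the mean
values = np.array([52.0, 55.1, 49.8, 51.2, 53.7, 50.5, 54.2, 52.9])
mean, sd = values.mean(), values.std(ddof=1)
se = sd / np.sqrt(len(values))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"agreement = {p_observed:.0%}, kappa = {kappa:.2f}")
print(f"mean = {mean:.1f}, SD = {sd:.1f}, SE = {se:.2f}, "
      f"95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```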
Halboub, Esam; Dhaifullah, Esam; Yasin, Rasha
2013-11-01
To evaluate the dental health status and toothbrushing behavior among Sana'a University students, and to explore any associations with different factors. In this cross-sectional study, the dental health of 360 students from the dental, medical, and literature faculties (120 each) at Sana'a University were examined using the Decayed, Missing, and Filled Teeth (DMFT) index. Data regarding study field, grade, toothbrushing behavior, parents' education, and smoking and khat chewing habits were recorded. Nearly 76% of students (n = 273) reported regularly brushing their teeth. Excluding fathers' education levels and khat chewing, other factors (faculty, grade, sex, mothers' education, and smoking) were significant independent predictors for this behavior. The overall mean DMFT score (± standard deviation) was 4.13 ± 3.1, and was found to be adversely influenced by smoking, which explained only 1.1% of the variance. Toothbrushing, sex, and smoking were significant independent predictors for the decay score, and explained 10.6% of its variance. Khat chewing was found to be adversely associated with the missing score, with an influence of only 2.9%. The filling score was found to be positively associated with toothbrushing and study grade, which together had an influence of 10%. The dental health and toothbrushing behaviors of Sana'a University students are unsatisfactory, and influenced unequally by different factors. © 2013 Wiley Publishing Asia Pty Ltd.
Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M
2017-06-01
To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 T acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analysis of T1 times was performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced T1 standard deviation in 14 out of 16 myocardial segments (87.5%). Significant reduction of T1 variance could be obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No such artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 time between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.
Lokant, M T; Naz, R K
2015-04-01
Prostate-specific antigen (PSA), produced by the prostate, liquefies post-ejaculate semen. PSA is detected in semen and blood. Increased circulating PSA levels indicate prostate abnormality [prostate cancer (PC), benign prostatic hyperplasia (BPH), prostatitis (PTIS)], with variance among individuals. As the prostate has been proposed as an immune organ, we hypothesise that variation in PSA levels among men may be due to the presence of auto-antibodies against PSA. Sera from healthy men (n = 28) and men having prostatitis (n = 25), BPH (n = 30) or PC (n = 29) were tested for the presence of PSA antibodies using enzyme-linked immunosorbent assay (ELISA), with values converted to standard deviation (SD) units, and Western blotting. Taking ≥2 SD units as the cut-off for positive immunoreactivity, 0% of normal men, 0% of men with prostatitis, 33% of men with BPH and 3.45% of men with PC demonstrated PSA antibodies. One-way analysis of variance (ANOVA) performed on the mean absorbance values and SD units of each group showed BPH as significantly different (P < 0.01) compared with PC and prostatitis. All other comparisons were nonsignificant (P > 0.05). Thirty-three percent of men with BPH had PSA antibodies by ELISA and Western blot. These discoveries may find clinical application in differential diagnosis among prostate abnormalities, especially differentiating BPH from prostate cancer and prostatitis. © 2014 Blackwell Verlag GmbH.
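Converting ELISA absorbances to standard deviation units relative to the healthy-control distribution, and applying the ≥2 SD positivity cut-off, can be sketched as below; the absorbance values are hypothetical.

```python
import statistics

# Hypothetical ELISA absorbance values for the healthy control group
controls = [0.21, 0.25, 0.19, 0.23, 0.22, 0.20, 0.24, 0.26, 0.18, 0.22]
mu = statistics.mean(controls)
sd = statistics.stdev(controls)

def sd_units(absorbance: float) -> float:
    """Express a test absorbance as SD units above the control-group mean."""
    return (absorbance - mu) / sd

# A sample is scored antibody-positive when it lies >= 2 SD units above the control mean
for value in (0.24, 0.31, 0.40):
    z = sd_units(value)
    print(f"absorbance {value:.2f}: {z:+.1f} SD units -> "
          f"{'positive' if z >= 2 else 'negative'}")
```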
Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.
Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe
2015-01-01
Geometric morphometrics aims to characterize the geometry of complex traits. It is therefore multivariate in essence. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; and (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing by the within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is oriented in a comparable way in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation in the interpretation of the PCA, bgPCA and CVA. Without arguing that one method performs 'better' than another, it emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
Schulze, P.A.; Capel, P.D.; Squillace, P.J.; Helsel, D.R.
1993-01-01
The usefulness and sensitivity of a portable immunoassay test for the semiquantitative field screening of water samples was evaluated by means of laboratory and field studies. Laboratory results indicated that the tests were useful for the determination of atrazine concentrations of 0.1 to 1.5 μg/L. At a concentration of 1 μg/L, the relative standard deviation in the difference between the regression line and the actual result was about 40 percent. The immunoassay was less sensitive and produced similar errors for other triazine herbicides. After standardization, the test results were relatively insensitive to ionic content and variations in pH (range, 4 to 10), mildly sensitive to temperature changes, and quite sensitive to the timing of the final incubation step; variances in timing can be a significant source of error. Almost all of the immunoassays predicted a higher atrazine concentration in water samples when compared to results of gas chromatography. If these tests are used as a semiquantitative screening tool, this tendency for overprediction does not diminish the tests' usefulness. Generally, the tests seem to be a valuable method for screening water samples for triazine herbicides.
Analysis of Statistical Methods Currently used in Toxicology Journals.
Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min
2014-09-01
Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by published studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and the standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why the methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal-variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
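The descriptive and inferential choices surveyed here (SD versus SEM, one-way ANOVA, and the rarely reported normality and equal-variance checks) can be illustrated with the sketch below on hypothetical dose-group data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical endpoint measured in three dose groups, n = 6 per group
control = rng.normal(100, 10, 6)
low_dose = rng.normal(95, 10, 6)
high_dose = rng.normal(80, 10, 6)

# Descriptive statistics: SD describes the spread of the data,
# SEM (= SD / sqrt(n)) describes the precision of the group mean
for name, g in (("control", control), ("low dose", low_dose), ("high dose", high_dose)):
    sd = g.std(ddof=1)
    print(f"{name}: mean = {g.mean():.1f}, SD = {sd:.1f}, SEM = {sd / np.sqrt(len(g)):.1f}")

# Assumption checks that the surveyed papers rarely reported
print("normality (Shapiro-Wilk) p:",
      [round(stats.shapiro(g)[1], 3) for g in (control, low_dose, high_dose)])
print("equal variances (Levene) p:",
      round(stats.levene(control, low_dose, high_dose)[1], 3))

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```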
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
[Extraction fragment: Figure 3-1, a time-axis diagram of single runway operations, plus glossary entries covering the release time of departure k, the standard deviation of the interarrival time, SIGMAR (the standard deviation of the arrival runway occupancy time), and the SINGLE program subroutine.]
Levegrün, Sabine; Pöttgen, Christoph; Jawad, Jehad Abu; Berkovic, Katharina; Hepp, Rodrigo; Stuschke, Martin
2013-02-01
To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as quality criterion of image guidance. Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed. Copyright © 2013 Elsevier Inc. All rights reserved.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
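The thresholding rule described (flagging pixels whose digital counts fall more than 3.5 standard deviations on the cold side of the scene mean) can be sketched as below on a synthetic scene; the values are illustrative, not TIROS-N data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "scene" of MIR digital counts with a cold (low-count) patch standing in
# for subvisible cirrus; the counts and geometry are invented for illustration.
scene = rng.normal(180, 6, size=(100, 100))
scene[40:55, 60:80] -= 30          # cold anomaly

mean, sd = scene.mean(), scene.std()

# Flag pixels more than 3.5 standard deviations on the cold side of the scene mean
cirrus_mask = scene < mean - 3.5 * sd
print(f"scene mean = {mean:.1f}, SD = {sd:.1f}, flagged pixels = {cirrus_mask.sum()}")
```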
25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A
2017-05-01
In this study we examined 6080 data points gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67%, 126%, and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m size of the sampling area and once a year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m size of the sampling area and once a week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once a year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area and much higher levels of variation in polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well defined aspects of the protocol. Further standardization of the protocol together with application of the information on the overall measurement uncertainty would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hardman, John; Al-Hadithy, Nawfal; Hester, Thomas; Anakwe, Raymond
2015-12-01
There remains little consensus regarding the optimal management of distal radius fractures. Fixed angle volar devices have gained recent popularity, but have also been associated with soft tissue complications. Intramedullary (IM) devices offer fixed angle stabilisation with minimally invasive surgical technique and low, IM profile. No formal review of outcomes could be identified. We conducted a systematic review of clinical studies regarding the use of fixed angle IM devices in acute extra-articular or simple intra-articular distal radius fractures. Preferred Reporting Items for Systematic Reviews (PRISMA) guidance was followed. Numerical data regarding functional scores, ranges of movement, radiological outcomes and complications were pooled to produce aggregate means and standard deviation. A total of 310 titles and abstracts were identified. Fourteen papers remained for analysis. Total patient number was 357, mean age 63.72 years and mean follow-up 12.77 months. Mean functional scores were all rated as 'excellent'. Aggregate means: flexion 53.62°, extension 56.38°, pronation 69.10°, supination 70.29°, ulnar deviation 28.35°, radial deviation 18.12°, radial height 8.98 mm, radial inclination 16.51°, volar tilt 5.35°, ulnar variance 0.66 mm and grip strength 90.37 %. Overall complication rate was 19.6 %. Tendon rupture was unreported. Tendon irritation was 0.88 %. Radial nerve paraesthesia was 11.44 %. Fixed angle IM devices facilitate excellent functional outcomes, with radiological and clinical parameters at least equivalent to volar plate devices. Low rates of tendon irritation and absence of tendon rupture are advantageous. Significant limitations include a lack of application for complex articular injuries and the propensity to cause a transient neuritis of the superficial branch of the radial nerve.
Convex hulls of random walks in higher dimensions: A large-deviation study
NASA Astrophysics Data System (ADS)
Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.
2017-12-01
The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, especially containing probabilities far smaller than P = 10⁻¹⁰⁰⁰ to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, and behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
Hug, François; Drouet, Jean Marc; Champoux, Yvan; Couturier, Antoine; Dorel, Sylvain
2008-11-01
The aim of this study was to determine whether high inter-individual variability of the electromyographic (EMG) patterns during pedaling is accompanied by variability in the pedal force application patterns. Eleven male experienced cyclists were tested at two submaximal power outputs (150 and 250 W). Pedal force components (effective and total forces) and the index of mechanical effectiveness were measured continuously using instrumented pedals and were synchronized with surface electromyography signals measured in ten lower limb muscles. The intersubject variability of EMG and mechanical patterns was assessed using the standard deviation, mean deviation, variance ratio and coefficient of cross-correlation (R(0), with lag time = 0). The results demonstrated a high intersubject variability of EMG patterns at both exercise intensities for the biarticular muscles as a whole (and especially for Gastrocnemius lateralis and Rectus femoris) and for one monoarticular muscle (Tibialis anterior). However, this heterogeneity of EMG patterns is not accompanied by a comparably high intersubject variability in pedal force application patterns. A very low variability in the three mechanical profiles (effective force, total force and index of mechanical effectiveness) was obtained in the propulsive downstroke phase, although a greater variability in these mechanical patterns was found during the upstroke and around the top dead center, and at 250 W when compared to 150 W. Overall, these results provide additional evidence for redundancy in the neuromuscular system.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
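A minimal sketch of the ABC idea described above, assuming the study reports n, the median, and the first and third quartiles; the normal data model, uniform priors, and keep-the-closest acceptance rule are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def abc_mean_sd(n, q1, med, q3, n_draws=20_000, keep=200):
    """Rough ABC estimate of a study's mean and SD from its sample size,
    median, and quartiles (normal data model, illustrative priors)."""
    sd0 = (q3 - q1) / 1.35                       # IQR/1.35 ~ SD for normal data
    mus = rng.uniform(med - 4 * sd0, med + 4 * sd0, n_draws)
    sds = rng.uniform(1e-6, 4 * sd0, n_draws)

    dist = np.empty(n_draws)
    for i, (mu, sd) in enumerate(zip(mus, sds)):
        x = rng.normal(mu, sd, n)                # simulate a candidate study
        s = np.percentile(x, [25, 50, 75])       # its summary statistics
        dist[i] = np.sqrt(np.sum((s - np.array([q1, med, q3])) ** 2))

    best = np.argsort(dist)[:keep]               # accept the closest simulations
    return mus[best].mean(), sds[best].mean()

mu_hat, sd_hat = abc_mean_sd(n=50, q1=4.1, med=6.0, q3=9.2)
print(f"estimated mean ~ {mu_hat:.2f}, estimated SD ~ {sd_hat:.2f}")
```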
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-19
..., Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, VA 22209-3939. (4) Hand Delivery or Courier: MSHA, Office of Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350... CONTACT: Mario Distasio, Chief of the Economic Analysis Division, Office of Standards, Regulations, and...
36 CFR 28.13 - Variance, commercial and industrial application procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variance, commercial and industrial application procedures. 28.13 Section 28.13 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and...
40 CFR 268.44 - Variance from a treatment standard.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...
40 CFR 268.44 - Variance from a treatment standard.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than) the concentrations necessary to minimize short- and long-term threats to human health and the... any given treatment variance is sufficient to minimize threats to human health and the environment... Selenium NA NA 25 mg/L TCLP NA. U.S. Ecology Idaho, Incorporated, Grandview, Idaho K08810 Standards under...
36 CFR 28.13 - Variance, commercial and industrial application procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variance, commercial and industrial application procedures. 28.13 Section 28.13 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and...
48 CFR 9904.407-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering studies, experience, or other supporting data) used in setting and revising standards; the period... their related variances may be recognized either at the time purchases of material are entered into the...-price standards are used and related variances are recognized at the time purchases of material are...
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian
2016-01-01
Introduction The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). Conclusions The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates. In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
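A minimal sketch of mixed-effects limits of agreement for one device against the gold standard, assuming a long-format table with hypothetical columns subject, device_rr, and gold_rr; the published analysis may also model activities, so this illustrates the approach rather than reproducing the paper's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mixed_effects_loa(df):
    """Mean bias and 95% limits of agreement from a random-intercept model of
    the per-reading differences (device minus gold standard)."""
    d = df.copy()
    d["diff"] = d["device_rr"] - d["gold_rr"]
    # Random intercept per participant separates within- from between-subject variation.
    model = smf.mixedlm("diff ~ 1", data=d, groups=d["subject"]).fit()
    bias = model.params["Intercept"]
    var_between = model.cov_re.iloc[0, 0]          # between-subject variance
    var_within = model.scale                       # residual (within-subject) variance
    sd_total = np.sqrt(var_between + var_within)
    return bias, bias - 1.96 * sd_total, bias + 1.96 * sd_total

# Example with simulated respiratory-rate readings (21 subjects, 30 readings each).
rng = np.random.default_rng(0)
rows = []
for subj in range(21):
    subj_shift = rng.normal(0, 2)
    for _ in range(30):
        gold = rng.normal(20, 5)
        rows.append({"subject": subj, "gold_rr": gold,
                     "device_rr": gold - 2 + subj_shift + rng.normal(0, 3)})
bias, lo, hi = mixed_effects_loa(pd.DataFrame(rows))
print(f"bias {bias:.2f}, limits of agreement [{lo:.2f}, {hi:.2f}]")
```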
Ding, Yao; Mohamed, Abdallah S R; Yang, Jinzhong; Colen, Rivka R; Frank, Steven J; Wang, Jihong; Wassal, Eslam Y; Wang, Wenjie; Kantor, Michael E; Balter, Peter A; Rosenthal, David I; Lai, Stephen Y; Hazle, John D; Fuller, Clifton D
2015-01-01
The purpose of this study was to investigate the potential of a head and neck magnetic resonance simulation and immobilization protocol on reducing motion-induced artifacts and improving positional variance for radiation therapy applications. Two groups (group 1, 17 patients; group 2, 14 patients) of patients with head and neck cancer were included under a prospective, institutional review board-approved protocol and signed informed consent. A 3.0-T magnetic resonance imaging (MRI) scanner was used for anatomic and dynamic contrast-enhanced acquisitions with standard diagnostic MRI setup for group 1 and radiation therapy immobilization devices for group 2 patients. The impact of magnetic resonance simulation/immobilization was evaluated qualitatively by 2 observers in terms of motion artifacts and positional reproducibility and quantitatively using 3-dimensional deformable registration to track intrascan maximum motion displacement of voxels inside 7 manually segmented regions of interest. The image quality of group 2 (29 examinations) was significantly better than that of group 1 (50 examinations) as rated by both observers in terms of motion minimization and imaging reproducibility (P < .0001). The greatest average maximum displacement was at the region of the larynx in the posterior direction for patients in group 1 (17 mm; standard deviation, 8.6 mm), whereas the smallest average maximum displacement was at the region of the posterior fossa in the superior direction for patients in group 2 (0.4 mm; standard deviation, 0.18 mm). Compared with group 1, maximum regional motion was reduced in group 2 patients in the oral cavity, floor of mouth, oropharynx, and larynx regions; however, the motion reduction reached statistical significance only in the regions of the oral cavity and floor of mouth (P < .0001). The image quality of head and neck MRI in terms of motion-related artifacts and positional reproducibility was greatly improved by use of radiation therapy immobilization devices. Consequently, immobilization with external and intraoral fixation in MRI examinations is required for radiation therapy application. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.
Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash
2017-01-01
Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases), with clear EBUS images were included. For each patient, a 400-pixel region of interest was selected, typically located at a 3- to 5-mm radius from the probe, from recorded EBUS images during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. Other characteristics investigated were inferior when compared to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
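A stylized numerical illustration (arbitrary values, not the paper's benchmark-dose model) of the point above: substituting the inflated overall standard deviation sqrt(s_a^2 + s_m^2) for s_a shrinks the estimated proportion of animals beyond the control 99th percentile, and the distortion stays small while s_m is below about one third of s_a.

```python
import numpy as np
from scipy.stats import norm

s_a = 1.0                                  # between-animal SD (arbitrary units)
shift = 1.5 * s_a                          # hypothetical mean shift at some dose

for ratio in (0.1, 1 / 3, 0.5, 1.0):
    s_m = ratio * s_a                      # measurement-error SD within animals
    s_obs = np.hypot(s_a, s_m)             # overall SD seen if measurement error is ignored
    risk_true = norm.sf(norm.ppf(0.99) * s_a, loc=shift, scale=s_a)
    risk_biased = norm.sf(norm.ppf(0.99) * s_obs, loc=shift, scale=s_obs)
    print(f"s_m/s_a = {ratio:.2f}: SD inflated x{s_obs / s_a:.3f}, "
          f"risk {risk_true:.3f} -> {risk_biased:.3f}")
```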
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviations of composite errors were computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
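The composite idea, blending a forward forecast into a backward forecast across the gap, can be sketched as follows. This is a simplification: it uses a univariate ARIMA in both directions and linear blending weights, whereas the study pairs TFN forecasts (which use the nearby station as an explanatory series) with ARIMA backcasts; the model order and data are made up.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def composite_gap_estimate(before, after, gap_len, order=(1, 0, 0)):
    """Fill a gap between two observed log-flow segments by blending a
    forward forecast from `before` with a backcast from the reversed `after`."""
    fwd = np.asarray(ARIMA(before, order=order).fit().forecast(steps=gap_len))
    bwd = np.asarray(ARIMA(after[::-1], order=order).fit().forecast(steps=gap_len))[::-1]
    w = np.linspace(1.0, 0.0, gap_len)   # trust the forecast more near the start of the gap
    return w * fwd + (1.0 - w) * bwd

# Illustrative log-flow record with a 10-day gap between day 50 and day 60.
rng = np.random.default_rng(2)
logflow = np.cumsum(rng.normal(0.0, 0.1, 120)) + 3.0
filled = composite_gap_estimate(logflow[:50], logflow[60:], gap_len=10)
print(np.round(filled, 2))
```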
Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
1972-01-01
The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.
Influence of asymmetrical drawing radius deviation in micro deep drawing
NASA Astrophysics Data System (ADS)
Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.
2017-09-01
Nowadays, an increasing demand for small metal parts in electronic and automotive industries can be observed. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, the downscaling of the forming process leads to new challenges in tooling and process design, such as high relative deviation of tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely-used tool to investigate the influence of symmetrical process deviations as for instance a global variance of the drawing radius. This study shows a different approach that allows to determine the impact of asymmetrical process deviations on micro deep drawing. In this particular case the impact of an asymmetrical drawing radius deviation and blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cups geometry. This is explained by different mechanisms that result in an uneven cup geometry. While blank displacement leads to material surplus on one side of the cup, an unsymmetrical radius deviation on the other hand generates uneven stretching of the cups wall. This is intensified for higher drawing ratios. It can be concluded that the effect of uneven radius geometry proves to be of major importance for the production of accurately shaped micro cups and cannot be compensated by intentional blank displacement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of the variation in the observed dependent variable (adjusted R squared), and the model is significant (P < 0.001). Total health expenditure growth increased by 1.21 standard deviations with a 1-standard-deviation increase in the health workforce growth rate. Furthermore, it decreased by 1.12 standard deviations with a 1-standard-deviation increase in the (negative) population growth rate. Finally, it increased by 0.38 standard deviations with a 1-standard-deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). The study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causal relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
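Coefficients expressed as standard deviations of the outcome per standard deviation of a predictor, as in the abstract, can be obtained by z-scoring all variables before an ordinary least-squares fit. The sketch below uses invented growth-rate series and variable names, not the Serbian data.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative annual growth-rate series (hypothetical numbers).
df = pd.DataFrame({
    "the_growth":        [2.1, 3.0, 1.8, 2.5, 0.9, 1.2, 2.8, 3.1, 2.0],
    "workforce_growth":  [1.0, 1.5, 0.8, 1.2, 0.2, 0.4, 1.4, 1.6, 0.9],
    "population_growth": [-0.3, -0.4, -0.2, -0.3, -0.5, -0.4, -0.2, -0.3, -0.4],
    "discharge_growth":  [0.5, 0.9, 0.3, 0.7, -0.1, 0.2, 0.8, 1.0, 0.4],
})

z = (df - df.mean()) / df.std(ddof=1)          # z-score every variable
X = sm.add_constant(z[["workforce_growth", "population_growth", "discharge_growth"]])
fit = sm.OLS(z["the_growth"], X).fit()

# Coefficients now read "SDs of expenditure growth per SD of each predictor".
print(fit.params.round(2))
print("adjusted R^2:", round(fit.rsquared_adj, 2))
```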
Operator performance and localized muscle fatigue in a simulated space vehicle control task
NASA Technical Reports Server (NTRS)
Lewis, J. L., Jr.
1979-01-01
Fourier transforms in a special purpose computer were utilized to obtain power spectral density functions from electromyograms of the biceps brachii, triceps brachii, brachioradialis, flexor carpi ulnaris, brachialis, and pronator teres in eight subjects performing isometric tracking tasks in two directions utilizing a prototype spacecraft rotational hand controller. Analysis of these spectra in general purpose computers aided in defining muscles involved in performing the task, and yielded a derived measure potentially useful in predicting task termination. The triceps was the only muscle to show significant differences in all possible tests for simple effects in both tasks and, overall, was the most consistently involved of the six muscles. The total power monitored for triceps, biceps, and brachialis dropped to minimal levels across all subjects earlier than for other muscles. However, smaller variances existed for the biceps, brachioradialis, brachialis, and flexor carpi ulnaris muscles and could provide longer predictive times due to smaller standard deviations for a greater population range.
Comparison of beam position calculation methods for application in digital acquisition systems
NASA Astrophysics Data System (ADS)
Reiter, A.; Singh, R.
2018-05-01
Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
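A minimal sketch of the difference-over-sum estimate using either the signal integral or the signal power of the two pickup electrodes; the calibration constant and waveforms are invented, and the straight-line-fit variant discussed in the paper is not reproduced here.

```python
import numpy as np

def position_diff_over_sum(u_a, u_b, k=10.0, use_power=False):
    """Beam position from two BPM electrode waveforms via difference-over-sum.
    k is an illustrative calibration constant (monitor sensitivity) in mm."""
    a, b = np.asarray(u_a, float), np.asarray(u_b, float)
    if use_power:
        a, b = a ** 2, b ** 2              # signal power instead of signal integral
    return k * np.sum(a - b) / np.sum(a + b)

# Illustrative pulse: the electrode closer to the beam sees a larger signal.
t = np.linspace(0, 1, 1000)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
rng = np.random.default_rng(1)
u_a = 1.10 * pulse + rng.normal(0, 0.02, t.size)   # beam displaced toward electrode A
u_b = 0.90 * pulse + rng.normal(0, 0.02, t.size)

print("integral estimate:", round(position_diff_over_sum(u_a, u_b), 3), "mm")
print("power estimate:   ", round(position_diff_over_sum(u_a, u_b, use_power=True), 3), "mm")
```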
Turbulent thermal superstructures in Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Stevens, Richard J. A. M.; Blass, Alexander; Zhu, Xiaojue; Verzicco, Roberto; Lohse, Detlef
2018-04-01
We report the observation of superstructures, i.e., very large-scale and long-lived coherent structures in highly turbulent Rayleigh-Bénard convection up to Rayleigh number Ra = 10⁹. We perform direct numerical simulations in horizontally periodic domains with aspect ratios up to Γ = 128. In the considered Ra number regime the thermal superstructures have a horizontal extent of six to seven times the height of the domain and their size is independent of Ra. Many laboratory experiments and numerical simulations have focused on small aspect ratio cells in order to achieve the highest possible Ra. However, here we show that for very high Ra integral quantities such as the Nusselt number and volume-averaged Reynolds number only converge to the large aspect ratio limit around Γ ≈ 4, while horizontally averaged statistics such as the standard deviation and kurtosis converge around Γ ≈ 8, the integral scale converges around Γ ≈ 32, and the peak position of the temperature variance and turbulent kinetic energy spectra only converges around Γ ≈ 64.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
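The "Normal Distribution Estimates" calculation, the value corresponding to a given cumulative probability for a stated mean and standard deviation, has a one-line equivalent in scipy (illustrative numbers, not the original Excel implementation):

```python
from scipy.stats import norm

mean, sd = 100.0, 15.0                      # illustrative sample mean and standard deviation
for p in (0.05, 0.50, 0.95):
    x = norm.ppf(p, loc=mean, scale=sd)     # value whose cumulative probability is p
    print(f"P(X <= {x:6.2f}) = {p:.2f}")
```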
Sample size calculation in economic evaluations.
Al, M J; van Hout, B A; Michel, B C; Rutten, F F
1998-06-01
A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction, in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
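One common way to operationalize a maximum acceptable ratio of incremental effects to costs is through net monetary benefit; the sketch below simulates trials of size n per arm and counts how often the incremental net benefit is significantly positive. Parameter values are invented and the formulation is not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def power_for_n(n, d_e=0.05, d_c=400.0, sd_e=0.2, sd_c=2000.0,
                rho=0.1, lam=20000.0, alpha=0.05, n_sim=4000):
    """Simulated power to show a positive incremental net monetary benefit
    (lam = maximum acceptable cost per unit of effect) with n patients per arm."""
    cov = rho * sd_e * sd_c
    sigma = np.array([[sd_e ** 2, cov], [cov, sd_c ** 2]])
    z_crit = norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        trt = rng.multivariate_normal([d_e, d_c], sigma, n)   # (effect, cost) per patient
        ctl = rng.multivariate_normal([0.0, 0.0], sigma, n)
        nb_t = lam * trt[:, 0] - trt[:, 1]                    # per-patient net benefit
        nb_c = lam * ctl[:, 0] - ctl[:, 1]
        se = np.sqrt(nb_t.var(ddof=1) / n + nb_c.var(ddof=1) / n)
        hits += (nb_t.mean() - nb_c.mean()) / se > z_crit
    return hits / n_sim

for n in (100, 250, 500):
    print(n, "patients per arm -> power", round(power_for_n(n), 2))
```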
Mukumbang, Ferdinand C; Alindekane, Leka Marcel
2017-04-01
The aim of this study was to explore the teacher identity formation dynamics of student nurse-educators about the subject matter, pedagogy and didactics. A case study using descriptive quantitative design was employed. Using a cross-sectional approach, data were collected in 2014 using a self-administered questionnaire. Participants were asked to self-evaluate their teaching competencies on the nursing subject matter, pedagogical expertise and didactical expertise. Using descriptive analysis we determined the central tendencies of the constructs. The descriptive analysis revealed a very small variance (0.0011) and standard deviation (0.04) among the means of the three constructs, which indicates a fair balance in the contribution of the subject matter, pedagogy and didactics towards teacher identity formation. Nursing student-educators can achieve a balanced combination of subject matter expert, pedagogical expert and didactical expert combination during the formation of their teacher identity. This could be indicative of how effective the training programme is in helping the students achieve a balanced teacher identity.
NASA Astrophysics Data System (ADS)
Selçuk, Gamze S.; Çalişkan, Serap; Erol, Mustafa
2007-04-01
Learning strategy concept was introduced in the education field from the development of cognitive psychology. Learning strategies are behaviors and thoughts that a learner engages in during learning which are intended to influence the learner's encoding process. Literature on learning strategies in physics field is very scarce. Participants of the research consist of teacher candidates (n=137) from 1st, 2nd, 3rd, 4th and 5th grade attending Department of Physics Education, Education Faculty of Buca, Dokuz Eylül University in Turkey. Data of this research was collected by ``Scale of Learning Strategies Usage in Physics'' (Cronbach's Alpha=0.93). Mean, Standard Deviation, Analysis of Variance were used to analyze the research data. This paper reports on teacher candidates' learning strategies used in physics education The paper investigates the relationships between learning strategies and physics achievement, class level. Some important outcomes of the research are presented, discussed and certain suggestions are made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Shaun; Potter, Charles; Medich, David
A recent analysis of historical radionuclide resuspension datasets confirmed the general applicability of the Anspaugh and modified Anspaugh models of resuspension factors following both controlled and disastrous releases. The observations appear to increase in variance earlier in time; however, all points were equally weighted in statistical fit calculations, inducing a positive skewing of resuspension coefficients. Such data are extracted from the available deposition experiments spanning 2900 days. Measurements within a 3-day window are grouped into singular sample sets to construct standard deviations. A refitting is performed using a relative instrumental weighting of the observations. The resulting best-fit equations produce tamer exponentials which give decreased integrated resuspension factor values relative to those reported by Anspaugh. As expected, the fits attenuate greater error amongst the data at earlier time. The reevaluation provides a sharper contrast between the empirical models, and reaffirms their deficiencies in the short-lived timeframe wherein the dynamics of particulate dispersion dominate the resuspension process.
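The "relative instrumental weighting" step amounts to a weighted least-squares fit in which each point is weighted by the inverse of its variance. The sketch below shows that idea on synthetic data, with a deliberately simple one-exponential stand-in for the Anspaugh-type models.

```python
import numpy as np
from scipy.optimize import curve_fit

def resuspension(t, a, lam, c):
    """Illustrative resuspension-factor model: decaying exponential plus a constant.
    (The Anspaugh-type models are more elaborate; this stands in for them.)"""
    return a * np.exp(-lam * t) + c

# Synthetic observations whose scatter is larger at early times, mimicking the dataset.
rng = np.random.default_rng(4)
t_days = np.array([1, 3, 10, 30, 100, 300, 1000, 2900], float)
truth = 1e-4 * np.exp(-0.05 * t_days) + 2e-8
sd = np.where(t_days < 100, 0.5, 0.1) * truth
k_obs = truth + rng.normal(0.0, sd)

# Relative instrumental weighting: each point enters the least-squares fit with
# weight 1/sd^2, so the noisier early-time points no longer dominate the fit.
popt, _ = curve_fit(resuspension, t_days, k_obs, p0=[1e-4, 0.05, 2e-8],
                    sigma=sd, absolute_sigma=True, maxfev=20000)
print("fitted a, lambda, c:", popt)
```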
Measurement System Analyses - Gauge Repeatability and Reproducibility Methods
NASA Astrophysics Data System (ADS)
Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej
2018-02-01
The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between both methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (then a sum of variation components does not give 100 %) and the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part to part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods among the professional society, future use, and acceptance by manufacturing industry were also discussed. Nowadays, the AIAG is the leading method in the industry.
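The variance-components bookkeeping that makes the contributions sum to 100 % can be sketched with a plain two-way ANOVA on a balanced parts-by-operators study; the data and design below are invented, and the AIAG average-and-range constants are not reproduced.

```python
import numpy as np

def grr_variance_components(y):
    """Variance components from a balanced crossed gauge study.
    y has shape (parts, operators, replicates)."""
    p, o, n = y.shape
    grand = y.mean()
    ss_p = o * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_o = p * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
    ss_cells = n * ((y.mean(axis=2) - grand) ** 2).sum()
    ss_po = ss_cells - ss_p - ss_o
    ss_e = ((y - grand) ** 2).sum() - ss_cells

    ms_p = ss_p / (p - 1)
    ms_o = ss_o / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_e = ss_e / (p * o * (n - 1))

    comp = {
        "repeatability (EV)": ms_e,
        "interaction":        max(0.0, (ms_po - ms_e) / n),
        "operators (AV)":     max(0.0, (ms_o - ms_po) / (p * n)),
        "part-to-part":       max(0.0, (ms_p - ms_po) / (o * n)),
    }
    total = sum(comp.values())
    return {k: (v, 100 * v / total) for k, v in comp.items()}   # (variance, % contribution)

# Illustrative data: 10 parts x 3 operators x 2 replicates.
rng = np.random.default_rng(3)
parts = rng.normal(0, 1.0, (10, 1, 1))
opers = rng.normal(0, 0.3, (1, 3, 1))
y = 5.0 + parts + opers + rng.normal(0, 0.2, (10, 3, 2))
for name, (var, pct) in grr_variance_components(y).items():
    print(f"{name:>18}: variance {var:.4f} ({pct:.1f} %)")
```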
Teismann, Tobias; Glaesmer, Heide; von Brachel, Ruth; Siegmann, Paula; Forkmann, Thomas
2017-10-01
The interpersonal-psychological theory of suicidal behavior posits that 2 proximal, causal, and interactive risk factors must be present for someone to desire suicide: perceived burdensomeness and thwarted belongingness. The purpose of the present study was to evaluate the predictive power of these 2 risk factors in a prospective study. A total of 231 adult outpatients (age: mean = 38.1, standard deviation = 12.3) undergoing cognitive-behavioral therapy took part in a pretreatment and a midtreatment assessment after the 10th therapy session. Perceived burdensomeness, thwarted belongingness, and the interaction between these 2 risk factors did not add incremental variance to the prediction of midtreatment suicide ideation after controlling for age, gender, depression, hopelessness, impulsivity, lifetime suicide attempts, and pretreatment suicide ideation. The best predictor of midtreatment suicide ideation was pretreatment suicide ideation. Results offer only limited support to the assumptions of the interpersonal theory of suicide. © 2017 Wiley Periodicals, Inc.
Liu, Timothy Y.; Sanders, Jason L.; Tsui, Fu-Chiang; Espino, Jeremy U.; Dato, Virginia M.; Suyama, Joe
2013-01-01
We studied the association between OTC pharmaceutical sales and volume of patients with influenza-like-illnesses (ILI) at an urgent care center over one year. OTC pharmaceutical sales explain 36% of the variance in the patient volume, and each standard deviation increase is associated with 4.7 more patient visits to the urgent care center (p<0.0001). Cross-correlation function analysis demonstrated that OTC pharmaceutical sales are significantly associated with patient volume during non-flu season (p<0.0001), but only the sales of cough and cold (p<0.0001) and thermometer (p<0.0001) categories were significant during flu season with a lag of two and one days, respectively. Our study is the first study to demonstrate and measure the relationship between OTC pharmaceutical sales and urgent care center patient volume, and presents strong evidence that OTC sales predict urgent care center patient volume year round. PMID:23555647
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings with the exclusion of many articles that we included in our review; the inclusion of articles that we did not include in our review; and the recalculation of effect sizes using the absolute value of the mean differences, in our opinion, the results of the 'Another look' article confirm the same findings as the 'Remarkable' paper.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Abbreviations: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity Correction. The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain ... estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion...
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by the Humphrey 24-2 SITA SAP.
Systems, Subjects, Sessions: To What Extent Do These Factors Influence EEG Data?
Melnik, Andrew; Legkov, Petr; Izdebski, Krzysztof; Kärcher, Silke M; Hairston, W David; Ferris, Daniel P; König, Peter
2017-01-01
Lab-based electroencephalography (EEG) techniques have matured over decades of research and can produce high-quality scientific data. It is often assumed that the specific choice of EEG system has limited impact on the data and does not add variance to the results. However, many low-cost and mobile EEG systems are now available, and there is some doubt as to how EEG data vary across these newer systems. We sought to determine how variance across systems compares to variance across subjects or repeated sessions. We tested four EEG systems: two standard research-grade systems, one system designed for mobile use with dry electrodes, and an affordable mobile system with a lower channel count. We recorded four subjects three times with each of the four EEG systems. This setup allowed us to assess the influence of all three factors on the variance of data. Subjects performed a battery of six short standard EEG paradigms based on event-related potentials (ERPs) and steady-state visually evoked potential (SSVEP). Results demonstrated that subjects account for 32% of the variance, systems for 9% of the variance, and repeated sessions for each subject-system combination for 1% of the variance. In most lab-based EEG research, the number of subjects per study typically ranges from 10 to 20, and the uncertainty in estimates of the mean (such as an ERP) will improve by the square root of the number of subjects. As a result, the variance due to EEG system (9%) is of the same order of magnitude as the variance due to subjects (32%/sqrt(16) = 8%) with a pool of 16 subjects. The two standard research-grade EEG systems had no significantly different means from each other across all paradigms. However, the two other EEG systems demonstrated different mean values from one or both of the two standard research-grade EEG systems in at least half of the paradigms. In addition to providing specific estimates of the variability across EEG systems, subjects, and repeated sessions, we also propose a benchmark to evaluate new mobile EEG systems by means of ERP responses.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-23
... deviate from the reservoir elevation rule curve stipulated under Article 401 of the project license. GRDA... on August 1, and instead implement release rates equivalent to 0.03 to 0.06 foot of reservoir elevation per day, beginning on August 1. Reservoir elevations under the proposal would be above the rule...
29 CFR 1905.10 - Variances and other relief under section 6(b)(6)(A).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(b)(6)(A). 1905... section 6(b)(6)(A). (a) Application for variance. Any employer, or class of employers, desiring a variance from a standard, or portion thereof, authorized by section 6(b)(6)(A) of the Act may file a written...
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4 × 10⁻⁴ and 4 × 10⁻³ have been obtained. Here, lowest standard deviations in refractive index close to our detection threshold could be achieved by both ion beam sputtering and plasma-ion-assisted deposition. In relation to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (S_r and S_R) such that the actual error in S_r and S_R relative to their respective true values, σ_r and σ_R, is at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of S_r and S_R were derived and are provided as supporting documentation: Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
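For orientation, a standard normal-theory approximation (not necessarily the authors' exact formula) puts the relative standard error of a sample standard deviation near 1/sqrt(2(n-1)), which can be inverted to give a rough number of replicates for a target margin of relative error:

```python
import math

def n_for_relative_error(margin):
    """Smallest n whose approximate relative standard error of s is below `margin`,
    using SE(s)/sigma ~ 1/sqrt(2(n-1)) for normally distributed data."""
    return math.ceil(1 + 1 / (2 * margin ** 2))

for m in (0.20, 0.10, 0.05):
    print(f"target {m:.0%} relative error -> about {n_for_relative_error(m)} replicates")
```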
Standard errors in forest area
Joseph McCollum
2002-01-01
I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...
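Because the argument hinges on the variance of a product (forest area is typically estimated as a total land area multiplied by an estimated forest proportion in double sampling), a small sketch of the exact identity for independent estimates may help: Var(XY) = μx²Var(Y) + μy²Var(X) + Var(X)Var(Y). This is a standard result, not necessarily the particular equation traced in the paper, and the numbers below are illustrative only.

```python
import math

def se_of_product(mean_x, se_x, mean_y, se_y):
    """Standard error of X*Y for independent estimates X and Y.

    Uses the identity Var(XY) = mx^2*Vy + my^2*Vx + Vx*Vy,
    which is exact when X and Y are independent.
    """
    var = mean_x ** 2 * se_y ** 2 + mean_y ** 2 * se_x ** 2 + se_x ** 2 * se_y ** 2
    return math.sqrt(var)

# Example: forest area = total land area * estimated forest proportion
# (all numbers are illustrative only).
print(se_of_product(mean_x=1_000_000, se_x=0.0, mean_y=0.42, se_y=0.01))
```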
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
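The distinction at issue is the standard one: the sample standard deviation describes the spread of the individual measurements, whereas the standard error of the mean describes the uncertainty of the sample mean and shrinks with sample size. A minimal sketch, with simulated values standing in for biomarker measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=25)   # illustrative measurements

sd = x.std(ddof=1)           # spread of individual measurements
sem = sd / np.sqrt(x.size)   # uncertainty of the sample mean

print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
```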
Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
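Definitions of a mean-absolute-deviation effect size vary; one plausible formulation, sketched below under that assumption, divides the difference in group means by the mean absolute deviation of the combined sample. It is intended only to illustrate the idea and is not necessarily the author's exact definition.

```python
import numpy as np

def mad_effect_size(treatment, control):
    """Difference in group means divided by the mean absolute deviation of
    the combined sample (one plausible formulation; definitions vary)."""
    treatment = np.asarray(treatment, float)
    control = np.asarray(control, float)
    combined = np.concatenate([treatment, control])
    mad = np.mean(np.abs(combined - combined.mean()))
    return (treatment.mean() - control.mean()) / mad

# Illustrative scores only.
print(mad_effect_size([12, 14, 15, 16], [10, 11, 12, 13]))
```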
Hopper, John L.
2015-01-01
How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors (and that is how risk gradients are interpreted), so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices in plant mapping is needed to provide the best information on plant conditions. The methods used in this research are the standard deviation and linear regression. This research aimed to determine the vegetation indices best suited for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS. The standard deviation analysis of the 23 vegetation indices with 27 samples identified the six indices with the highest standard deviations, namely GRVI, SR, NLI, SIPI, GEMI, and LAI; their standard deviation values are 0.47, 0.43, 0.30, 0.17, 0.16, and 0.13, respectively. Regression correlation analysis of the 23 vegetation indices with 280 samples identified six indices, namely NDVI, ENDVI, GDVI, VARI, LAI, and SIPI; this selection was based on the regression correlation, with the lowest R² value above 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices, namely NDVI, ENDVI, GDVI, LAI, and SIPI. The results of both methods show that combining the two is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys and showed good results for the prediction of microseepages.
Fischer, A; Friggens, N C; Berry, D P; Faverdin, P
2018-07-01
The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In the first, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the second, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method split the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method for isolating the cow-specific component of REI in dairy cows.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously with the TLD audit the Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty budget derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit 94% of the audits have resulted in Pass (Optimal Level) and 6% of the audits have resulted in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit, or an on-site ion chamber measurement.
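The two-tiered action levels described above can be expressed as a tiny decision rule; the sketch below hard-codes the 1.3% combined standard uncertainty quoted in the abstract and is illustrative only.

```python
def classify_audit(deviation_pct, sigma_pct=1.3):
    """Two-tiered outcome: Pass (Optimal) within 2 sigma, Pass (Action)
    within 3 sigma, otherwise Fail (Out of Tolerance). The 1.3% combined
    standard uncertainty is the value quoted in the abstract."""
    d = abs(deviation_pct)
    if d <= 2 * sigma_pct:
        return "Pass (Optimal Level)"
    if d <= 3 * sigma_pct:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

for dev in (0.5, 3.0, 4.5):
    print(dev, classify_audit(dev))
```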
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β = 0.4. This setting led to RC_rod,1mm = 0.21, RC_rod,2mm = 0.57, %STD_unif = 1.38, SOR_wat = 0.0011, and SOR_air = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β = 0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
NASA Astrophysics Data System (ADS)
Parey, S.
2014-12-01
F. J. Acero (Dpto. Física, Universidad de Extremadura, Avda. de Elvas s/n, 06006 Badajoz), S. Parey, T. T. H. Hoang (EDF/R&D, 6 quai Watier, 78401 Chatou Cedex, France), and D. Dacunha-Castelle (Laboratoire de Mathématiques, Université Paris 11, Orsay, France). Trends can already be detected in daily rainfall amount in the Iberian Peninsula (IP), and this will have an impact on the extreme levels. In this study, we compare different ways to estimate future return levels for heavy rainfall, based on statistical extreme value theory. Both Peaks over Threshold (POT) and block maxima with the Generalized Extreme Value (GEV) distribution will be used and their results compared when linear trends are assumed in the parameters: threshold and scale parameter for POT, and location and scale parameter for GEV. But rainfall over the IP is a special variable in that a large number of the values are 0; thus, the impact of taking this into account is discussed too. Another approach is then tested, based on the evolutions of the mean and variance obtained from the time series of rainy days only, and of the number of rainy days. A statistical test, similar to that designed for temperature in Parey et al. (2013), is used to assess whether the trends in extremes can be considered as mostly due to these evolutions when considering only rainy days. The results show that this is mainly the case: the extremes of the residuals, after removing the trends in mean and standard deviation, cannot be differentiated from those of a stationary process. Thus, the future return levels can be estimated from the stationary return level of these residuals and an estimation of the future mean and standard deviation. Moreover, an estimation of the future number of rainy days is used to retrieve the return levels for all days. All of these comparisons are made for an ensemble of high-quality rainfall time series observed in the Iberian Peninsula over the period 1961-2010, from which we want to estimate a 20-year return level expected in 2020. The evolutions and the impact of the different approaches are discussed for three seasons: fall, spring and winter. Parey S., Hoang T.T.H., Dacunha-Castelle D.: The importance of mean and variance in predicting changes in temperature extremes, Journal of Geophysical Research: Atmospheres, Vol. 118, 1-12, 2013.
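For the block-maxima branch of such comparisons, a stationary GEV fit and the corresponding T-year return level (the 1 − 1/T quantile) can be sketched as below; this deliberately ignores the linear trends in the parameters discussed in the abstract, and the data are simulated placeholders.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=30.0, scale=8.0, size=50)  # illustrative annual maxima (mm)

c, loc, scale = genextreme.fit(annual_max)                    # stationary GEV fit
rl_20yr = genextreme.ppf(1 - 1 / 20, c, loc=loc, scale=scale)  # 20-year return level
print(f"20-year return level: {rl_20yr:.1f} mm")
```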
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
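The reliability issue raised here comes from correlations among least-squares coefficients. A minimal sketch of how such correlations can be inspected, using the usual parameter covariance matrix s²(XᵀX)⁻¹ (the model and numbers are illustrative, not taken from the paper):

```python
import numpy as np

# Fit y = b0 + b1*x + b2*x^2 on a narrow x-range, where the columns of the
# design matrix are strongly collinear, and inspect the parameter
# correlation matrix derived from cov(b) = s^2 * (X'X)^(-1).
rng = np.random.default_rng(2)
x = np.linspace(10.0, 11.0, 30)                 # narrow range -> collinearity
y = 1.0 + 0.5 * x + 0.05 * x ** 2 + rng.normal(0, 0.1, x.size)

X = np.column_stack([np.ones_like(x), x, x ** 2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
dof = x.size - X.shape[1]
s2 = np.sum((y - X @ beta) ** 2) / dof
cov = s2 * np.linalg.inv(X.T @ X)
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)   # off-diagonals near +/-1 indicate collinearity
print(np.round(corr, 3))
```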
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
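For reference, the classic normal-theory interval that the proposed robust interval is compared against can be built from the F distribution of the variance ratio. The sketch below shows that textbook method, not the paper's robust procedure; the data are simulated.

```python
import numpy as np
from scipy.stats import f

def sd_ratio_ci(x, y, alpha=0.05):
    """Classic normal-theory CI for sigma_x / sigma_y (not the robust
    interval proposed in the paper)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = x.size, y.size
    ratio2 = np.var(x, ddof=1) / np.var(y, ddof=1)
    lo = ratio2 / f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)
    hi = ratio2 / f.ppf(alpha / 2, n1 - 1, n2 - 1)
    return np.sqrt(lo), np.sqrt(hi)

rng = np.random.default_rng(3)
print(sd_ratio_ci(rng.normal(0, 1.5, 40), rng.normal(0, 1.0, 35)))
```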
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fave, X; Fried, D; UT Health Science Center Graduate School of Biomedical Sciences, Houston, TX
2015-06-15
Purpose: Several studies have demonstrated the prognostic potential for texture features extracted from CT images of non-small cell lung cancer (NSCLC) patients. The purpose of this study was to determine if these features could be extracted with high reproducibility from cone-beam CT (CBCT) images in order for features to be easily tracked throughout a patient's treatment. Methods: Two materials in a radiomics phantom, designed to approximate NSCLC tumor texture, were used to assess the reproducibility of 26 features. This phantom was imaged on 9 CBCT scanners, including Elekta and Varian machines. Thoracic and head imaging protocols were acquired on each machine. CBCT images from 27 NSCLC patients imaged using the thoracic protocol on Varian machines were obtained for comparison. The variance for each texture measured from these patients was compared to the variance in phantom values for different manufacturer/protocol subsets. Levene's test was used to identify features which had a significantly smaller variance in the phantom scans versus the patient data. Results: Approximately half of the features (13/26 for material1 and 15/26 for material2) had a significantly smaller variance (p<0.05) between Varian thoracic scans of the phantom compared to patient scans. Many of these same features remained significant for the head scans on Varian (12/26 and 8/26). However, when thoracic scans from Elekta and Varian were combined, only a few features were still significant (4/26 and 5/26). Three features (skewness, coarsely filtered mean and standard deviation) were significant in almost all manufacturer/protocol subsets. Conclusion: Texture features extracted from CBCT images of a radiomics phantom are reproducible and show significantly less variation than the same features measured from patient images when images from the same manufacturer or with similar parameters are used. Reproducibility between CBCT scanners may be high enough to allow the extraction of meaningful texture values for patients. This project was funded in part by the Cancer Prevention Research Institute of Texas (CPRIT). Xenia Fave is a recipient of the American Association of Physicists in Medicine Graduate Fellowship.
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (x_b), based on the cross-shore locations of maximum pixel intensity (x_i) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (x_D). Not only is the Time-exposure maximum pixel intensity location (x_i-Ti) well related to x_b, but so is the maximum pixel intensity location of Variance images (x_i-Va), although the latter was typically located 15 m offshore of the former. In addition, x_i-Va was observed to be better associated with x_D even though x_i-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (H_s) and water level were observed to affect the two types of images in a similar way, with an increase in both resulting in x_i shifting offshore. This water-level-induced x_i variability is opposite to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on x_i, H_s and water level allowed the reduction of the residual errors between x_b and x_i, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using x_i-Ti (x_i-Va) when x_b was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
Sampogna, F; Johansson, V; Axtelius, B; Abeni, D; Söderfeldt, B
2008-12-01
In a previous study, we observed that the concordance between patients' and caregivers' evaluation of oral health-related quality of life (OHRQoL) was low. The aim of this study was to use multilevel analysis to investigate the possible determinants of the low concordance, taking into account different patients' demographic and clinical variables, the financial system used by patients to pay for dental treatment, and the role of the different caregivers and clinics. The OHRQoL of patients was assessed both by the patients and by their caregivers, using the Oral Health Impact Profile (OHIP)-14. Data were collected in four clinics, and patients were evaluated by one of 27 caregivers. We tested eight multilevel models, using the difference (caregivers OHIP - patients OHIP) as the dependent variable. Data were complete for 432 patients. The mean difference was 4.4 (standard deviation = 8.2; higher scores indicated a higher impact on OHRQoL). The variance due to patients was partly explained by their age, gender, and number of teeth, with a greater OHIP difference for older vs. younger patients, for women than for men, and in patients with fewer teeth. Almost 30% of the variance was due to caregivers, while the effect of clinics was not significant. It is important to study the possible causes of the different judgments concerning patients' OHRQoL by patients and caregivers, in order to improve the patients' satisfaction with care.
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of ¹⁴C in lake waters and other lacustrine materials, now normalized for ¹³C content. The newly accepted convention is followed in expressing normalized ¹⁴C values as Δ = δ¹⁴C − (2 δ¹³C + 50)[1 + (δ¹⁴C/1000)], where Δ is the per mil deviation of the ¹⁴C of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the ¹⁴C content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δ¹⁴C is the measured deviation from 95% of the NBS standard, and δ¹³C is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial ¹⁴C resulting from nuclear tests. (auth)
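A tiny sketch of the normalization convention as reconstructed above (the placement of the minus sign follows that reconstruction, and the input values are illustrative only):

```python
def delta_c14(d14c, d13c):
    """Normalized Delta (per mil) from measured delta-C14 and delta-C13,
    following the convention reconstructed above:
        Delta = dC14 - (2*dC13 + 50) * (1 + dC14/1000)
    """
    return d14c - (2.0 * d13c + 50.0) * (1.0 + d14c / 1000.0)

# With delta-C13 = -25 per mil (wood-like fractionation) the correction
# term vanishes; values are illustrative only.
print(delta_c14(d14c=-50.0, d13c=-25.0))
```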
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... compliance plan proposed by the applicant can reasonably be implemented and will achieve compliance as...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation s_r ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation s_R ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
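The repeatability and reproducibility standard deviations reported here follow the usual ASTM E691-style one-way calculation, in which the reproducibility variance combines the within-laboratory variance with a between-laboratory component. A minimal sketch under the assumption of a balanced design, with purely illustrative numbers:

```python
import numpy as np

def repeatability_reproducibility(results):
    """results: 2-D array, rows = laboratories, columns = replicates
    (balanced design). Returns (s_r, s_R) following the usual
    E691-style one-way calculation."""
    results = np.asarray(results, float)
    p, n = results.shape
    cell_means = results.mean(axis=1)
    s_r2 = results.var(axis=1, ddof=1).mean()    # within-lab (repeatability) variance
    s_xbar2 = cell_means.var(ddof=1)             # variance of lab means
    s_L2 = max(s_xbar2 - s_r2 / n, 0.0)          # between-lab component
    return np.sqrt(s_r2), np.sqrt(s_L2 + s_r2)   # (s_r, s_R)

demo = [[0.82, 0.83, 0.82, 0.84],
        [0.80, 0.81, 0.80, 0.80],
        [0.85, 0.84, 0.86, 0.85]]
print(repeatability_reproducibility(demo))
```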
Decomposing genomic variance using information from GWA, GWE and eQTL analysis.
Ehsani, A; Janss, L; Pomp, D; Sørensen, P
2016-04-01
A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance. © 2015 Stichting International Foundation for Animal Genetics.
Comparison Groups in Short Interrupted Time-Series: An Illustration Evaluating No Child Left Behind
ERIC Educational Resources Information Center
Wong, Manyee; Cook, Thomas D.; Steiner, Peter M.
2009-01-01
Interrupted time-series (ITS) are often used to assess the causal effect of a planned or even unplanned shock introduced into an on-going process. The pre-intervention slope is supposed to index the causal counterfactual, and deviations from it in mean, slope or variance are used to indicate an effect. However, a secure causal inference is only…
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulations studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
Measuring systems of hard to get objects: problems with analysis of measurement results
NASA Astrophysics Data System (ADS)
Gilewska, Grazyna
2005-02-01
The problem of limited access to the metrological parameters of measured objects arises in many measurements, especially for biological objects, whose parameters are very often determined by indirect methods. When access to the measurement object is very limited, random components dominate the measurement results. Every measuring process is subject to conditions that limit how it can be improved (e.g. increasing the number of measurement repetitions to decrease the random limiting error). These may be temporal or financial limitations or, in the case of biological objects, small sample volume, the influence of the measuring tool and observer on the object, or fatigue effects, e.g. in a patient. Taking these difficulties into consideration, the author developed and verified the practical application of methods for reducing outlying observations and, subsequently, of methods for eliminating measured data with excess variance, in order to decrease the standard deviation of the mean of the measured data given a limited amount of data and an accepted level of confidence. The methods were verified on measurements of knee-joint space width obtained from radiographs. Measurements were carried out indirectly on digital images of the radiographs. The results confirmed the validity of the proposed methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected results.
Statistical tests for power-law cross-correlated processes
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameter, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One of the significant shortcomings of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to the subject's health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that provides low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: The late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude; may the departed soul rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy, in the anterior as well as the overall ratio, among different malocclusions, and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. Results show that the means and standard deviations of the ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the standard deviations are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
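The two time-domain indices emphasized here appear to correspond to SDNN (standard deviation of all normal-to-normal intervals) and RMSSD (root mean square of successive differences); the sketch below computes the standard definitions of both, with illustrative RR intervals.

```python
import numpy as np

def sdnn_rmssd(rr_ms):
    """SDNN: standard deviation of normal-to-normal (RR) intervals.
    RMSSD: root mean square of successive RR-interval differences."""
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Illustrative RR intervals in milliseconds.
print(sdnn_rmssd([812, 797, 830, 846, 801, 790, 823]))
```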
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor K_E (standard deviate). Various K_E values were explored; values of K_E larger than 8 were found physically unreasonable. It is concluded that the value of K_E should be in the range from 7 to 8. A unit error in estimating K_E translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dK_E = 1.0 (range 0.5-1.5) and an error in projected high air temperature dT_a = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dT_s = 0.8 °C.
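The standard deviate estimate described here is simply the mean of the partial maximum series plus K_E times its standard deviation; a minimal sketch using the 7 to 8 range for K_E suggested in the abstract (the input temperatures are illustrative):

```python
import numpy as np

def extreme_temperature(partial_max_series, k_e=7.5):
    """Standard deviate estimate of the extreme stream temperature:
    mean of the partial maximum series plus K_E times its standard
    deviation. K_E of 7 to 8 is the range suggested in the abstract."""
    t = np.asarray(partial_max_series, float)
    return t.mean() + k_e * t.std(ddof=1)

# Illustrative partial maximum stream temperatures (deg C).
print(round(extreme_temperature([24.1, 25.3, 23.8, 24.9, 25.0], k_e=7.5), 1))
```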
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 μm emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s⁻¹ and a gas velocity dispersion of σ_g < 23 km s⁻¹ (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s⁻¹ and σ_g < 4 km s⁻¹ (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σ_g = 92 ± 20 km s⁻¹. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
Soave, David; Sun, Lei
2017-09-01
We generalize Levene's test for variance (scale) heterogeneity between k groups for more complex data, when there are sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_(k-1)/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
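For reference, the classical k-group Levene test being generalized here is available in SciPy; the sketch below applies it to simulated independent groups and, unlike the paper's method, does not handle sample correlation or group-membership uncertainty.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(4)
g1 = rng.normal(0.0, 1.0, 60)   # three groups with unequal spread
g2 = rng.normal(0.0, 1.5, 60)
g3 = rng.normal(0.0, 2.0, 60)

# center='median' gives the Brown-Forsythe variant; center='mean' is Levene's original.
stat, p = levene(g1, g2, g3, center='median')
print(f"W = {stat:.2f}, p = {p:.4f}")
```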
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2-3), subsets (21-24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61-92% and 88-99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviation was able to identify features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) were observed for routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment, demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
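For readers who want to experiment with such estimators, the Python sketch below computes the (unmodified) median absolute deviation and the Gini mean difference, each rescaled for consistency with the standard deviation at the Gaussian model, and compares them with the ordinary standard deviation on a contaminated sample. The simulation settings are illustrative assumptions, and this is the standard Gini estimator rather than Shulenin's modified or adaptive variants.

```python
import numpy as np

def mad_scale(x):
    # Median absolute deviation, scaled for consistency at the Gaussian model.
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def gini_scale(x):
    # Gini mean difference: average absolute difference over all pairs.
    # For Gaussian data E|X - Y| = 2*sigma/sqrt(pi), hence the rescaling.
    diffs = np.abs(x[:, None] - x[None, :])
    n = len(x)
    gmd = diffs.sum() / (n * (n - 1))
    return gmd * np.sqrt(np.pi) / 2.0

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 200)
contaminated = np.concatenate([clean, rng.normal(0.0, 10.0, 10)])  # ~5% outliers

for name, x in [("clean", clean), ("contaminated", contaminated)]:
    print(name, "std:", round(x.std(ddof=1), 2),
          "MAD:", round(mad_scale(x), 2),
          "Gini:", round(gini_scale(x), 2))
```

On the contaminated sample the standard deviation inflates sharply, while the two robust estimators stay close to the true scale of 1.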
40 CFR 63.7751 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
...-2502, Retaining and Flood Walls, 29 September 1989. h. Engineer Technical Letter (ETL) 1110-2-575... vegetation variance request. h. All final documentation for the vegetation variance request shall be uploaded..., especially for I-walls of concern as identified per Paragraph 3.h. For floodwalls, the landside and waterside...
flowVS: channel-specific variance stabilization in flow cytometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
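flowVS itself is distributed as an R/Bioconductor package; the hedged Python sketch below only illustrates the underlying idea of channel-specific variance stabilization, choosing an asinh cofactor that makes within-population variances as equal as possible. Bartlett's statistic is used here as the homogeneity criterion, and the function names and toy data are illustrative assumptions, not the package's API.

```python
import numpy as np
from scipy import stats

def bartlett_stat(populations):
    # Bartlett's test statistic measures how unequal the variances are.
    return stats.bartlett(*populations).statistic

def choose_cofactor(populations, cofactors):
    # Pick the asinh cofactor that makes within-population variances most equal.
    return min(cofactors,
               key=lambda c: bartlett_stat([np.arcsinh(p / c) for p in populations]))

rng = np.random.default_rng(2)
# Toy fluorescence data: population variance grows with the mean.
pops = [rng.gamma(shape=m, scale=50.0, size=500) for m in (2, 6, 12)]

c = choose_cofactor(pops, cofactors=np.logspace(0, 4, 50))
stabilized = [np.arcsinh(p / c) for p in pops]
print("chosen cofactor:", round(c, 1))
print("variances after stabilization:", [round(np.var(p), 3) for p in stabilized])
```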
Sleep Duration and Area-Level Deprivation in Twins
Watson, Nathaniel F.; Horn, Erin; Duncan, Glen E.; Buchwald, Dedra; Vitiello, Michael V.; Turkheimer, Eric
2016-01-01
Study Objectives: We used quantitative genetic models to assess whether area-level deprivation as indicated by the Singh Index predicts shorter sleep duration and modifies its underlying genetic and environmental contributions. Methods: Participants were 4,218 adult twin pairs (2,377 monozygotic and 1,841 dizygotic) from the University of Washington Twin Registry. Participants self-reported habitual sleep duration. The Singh Index was determined by linking geocoding addresses to 17 indicators at the census-tract level using data from Census of Washington State and Census Tract Cartographic Boundary Files from 2000 and 2010. Data were analyzed using univariate and bivariate genetic decomposition and quantitative genetic interaction models that assessed A (additive genetics), C (common environment), and E (unique environment) main effects of the Singh Index on sleep duration and allowed the magnitude of residual ACE variance components in sleep duration to vary with the Index. Results: The sample had a mean age of 38.2 y (standard deviation [SD] = 18), and was predominantly female (62%) and Caucasian (91%). Mean sleep duration was 7.38 h (SD = 1.20) and the mean Singh Index score was 0.00 (SD = 0.89). The heritability of sleep duration was 39% and the Singh Index was 12%. The uncontrolled phenotypic regression of sleep duration on the Singh Index showed a significant negative relationship between area-level deprivation and sleep length (b = −0.080, P < 0.001). Every 1 SD in Singh Index was associated with a ∼4.5 min change in sleep duration. For the quasi-causal bivariate model, there was a significant main effect of E (b0E = −0.063; standard error [SE] = 0.30; P < 0.05). Residual variance components unique to sleep duration were significant for both A (b0Au = 0.734; SE = 0.020; P < 0.001) and E (b0Eu = 0.934; SE = 0.013; P < 0.001). Conclusions: Area-level deprivation has a quasi-causal association with sleep duration, with greater deprivation being related to shorter sleep. As area-level deprivation increases, unique genetic and nonshared environmental residual variance in sleep duration increases. Citation: Watson NF, Horn E, Duncan GE, Buchwald D, Vitiello MV, Turkheimer E. Sleep duration and area-level deprivation in twins. SLEEP 2016;39(1):67– 77. PMID:26285009
In vitro marginal fit of three all-ceramic crown systems.
Yeo, In-Sung; Yang, Jae-Ho; Lee, Jai-Bong
2003-11-01
Studies on marginal discrepancies of single restorations using various systems and materials have resulted in statistical inferences that are ambiguous because of small sample sizes and limited numbers of measurements per specimen. The purpose of this study was to compare the marginal adaptation of single anterior restorations made using different systems. The in vitro marginal discrepancies of 3 different all-ceramic crown systems (Celay In-Ceram, conventional In-Ceram, and IPS Empress 2 layering technique), and a control group of metal ceramic restorations were evaluated and compared by measuring the gap dimension between the crowns and the prepared tooth at the marginal opening. The crowns were made for 1 extracted maxillary central incisor prepared with a 1-mm shoulder margin and 6-degree tapered walls by milling. Thirty crowns per system were fabricated. Crown measurements were recorded with an optical microscope, with an accuracy of +/-0.1 microm, at 50 points spaced approximately 400 microm along the circumferential margin. The criterion of 120 microm was used as the maximum clinically acceptable marginal gap. Mean gap dimensions and standard deviations were calculated for marginal opening. The data were analyzed with a 1-way analysis of variance (alpha=.05). Mean gap dimensions and standard deviations at the marginal opening for the incisor crowns were 87 +/- 34 microm for control, 83 +/- 33 microm for Celay In-Ceram, 112 +/- 55 microm for conventional In-Ceram, and 46 +/- 16 microm for the IPS Empress 2 layering technique. Significant differences were found among the crown groups (P<.05). Compared with the control group, the IPS Empress 2 group had significantly smaller marginal discrepancies (P<.05), and the conventional In-Ceram group exhibited significantly greater marginal discrepancies (P<.05). There was no significant difference between the Celay In-Ceram and the control group. Within the limitations of this study, the marginal discrepancies were all within the clinically acceptable standard set at 120 microm. However, the IPS Empress 2 system showed the smallest and most homogeneous gap dimension, whereas the conventional In-Ceram system presented the largest and more variable gap dimension compared with the metal ceramic (control) restoration.
Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holderman, Charles
Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as: KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different than the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed). The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, shows larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data is non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories.
Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.
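A small Python sketch of the skewness-reduction step described above, using simulated right-skewed data in place of the actual chlorophyll measurements (the distribution parameters are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Toy chlorophyll-a-like data: strongly right-skewed, as in the report.
chl = rng.lognormal(mean=3.0, sigma=0.8, size=60)

print("raw:  mean=%.1f  median=%.1f  skewness=%.2f"
      % (chl.mean(), np.median(chl), stats.skew(chl)))

log_chl = np.log(chl)
print("log:  mean=%.2f  median=%.2f  skewness=%.2f"
      % (log_chl.mean(), np.median(log_chl), stats.skew(log_chl)))
```

As in the report, the raw data show a mean well above the median and a large positive skewness, both of which the log transformation substantially reduces.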
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
Screen Twice, Cut Once: Assessing the Predictive Validity of Teacher Selection Tools
ERIC Educational Resources Information Center
Goldhaber, Dan; Grout, Cyrus; Huntington-Klein, Nick
2015-01-01
It is well documented that teachers can have profound effects on student outcomes. Empirical estimates find that a one standard deviation increase in teacher quality raises student test achievement by 10 to 25 percent of a standard deviation. More recent evidence shows that the effectiveness of teachers can affect long-term student outcomes, such…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
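As a hedged illustration (not the article's own code), the following Python sketch estimates the within-subject standard deviation from repeated measurements and reports the repeatability as 2.77 times that value; the simulated subjects and measurement noise are illustrative assumptions.

```python
import numpy as np

def within_subject_sd(measurements):
    # measurements: 2-D array, one row per subject, repeated measurements in columns.
    # The within-subject SD is the square root of the pooled within-subject variance.
    within_var = measurements.var(axis=1, ddof=1).mean()
    return np.sqrt(within_var)

rng = np.random.default_rng(4)
true_values = rng.normal(100.0, 15.0, size=30)                        # 30 subjects
repeats = true_values[:, None] + rng.normal(0.0, 4.0, size=(30, 3))   # 3 repeats each

sw = within_subject_sd(repeats)
repeatability = 2.77 * sw   # bound expected to cover the difference between two repeats 95% of the time
print(f"within-subject SD = {sw:.2f}, repeatability = {repeatability:.2f}")
```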
Parabolic trough receiver heat loss and optical efficiency round robin 2015/2016
NASA Astrophysics Data System (ADS)
Pernpeintner, Johannes; Schiricke, Björn; Sallaberry, Fabienne; de Jalón, Alberto García; López-Martín, Rafael; Valenzuela, Loreto; de Luca, Antonio; Georg, Andreas
2017-06-01
A round robin for parabolic trough receiver heat loss and optical efficiency in the laboratory was performed between five institutions using five receivers in 2015/2016. Heat loss testing was performed at three cartridge heater test benches and one Joule heating test bench in the temperature range between 100 °C and 550 °C. Optical efficiency testing was performed with two spectrometric test benches and one calorimetric test bench. Heat loss testing results showed standard deviations on the order of 6% to 12% for most temperatures and receivers, and a standard deviation of 17% for one receiver at 100 °C. Optical efficiency is presented normalized across laboratories, showing standard deviations of 0.3% to 1.3% depending on the receiver.
Benign positional vertigo and hyperuricaemia.
Adam, A M
2005-07-01
To find out if there is any association between serum uric acid level and positional vertigo. A prospective, case-controlled study. A private neurological clinic. All patients presenting with vertigo. Ninety patients were seen in this period, 78 males and 19 females. Mean age was 47 +/- 3 years (at 95% confidence level) with a standard deviation of 12.4. Their mean uric acid level was 442 +/- 16 umol/l (at 95% confidence level) with a standard deviation of 79.6 umol/l, as compared to 291 +/- 17 umol/l (at 95% confidence level) with a standard deviation of 79.7 umol/l in the control group. The P-value was less than 0.001. There is a significant association between high uric acid and benign positional vertigo.
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 1 degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of .78. Three distinct distributions of data were identified as (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions of data were found to occupy distinct geographic areas in the Palus Somni region.
Screening Samples for Arsenic by Inductively Coupled Plasma-Mass Spectrometry for Treaty Samples
2014-02-01
[Table fragment: replicate statistics from the arsenic screening analyses, listing mean percent recoveries (roughly 105–119%), standard deviations, and relative standard deviations (% RSD, roughly 2.6–15.9%) at each concentration level.]
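For reference, the relative standard deviation is simply the sample standard deviation expressed as a percentage of the mean; a minimal Python sketch with hypothetical replicate recoveries:

```python
import numpy as np

def percent_rsd(x):
    # Relative standard deviation: sample SD expressed as a percentage of the mean.
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

replicate_recoveries = [107.9, 106.4, 105.2, 105.5, 105.8]  # hypothetical % recoveries
print(f"%RSD = {percent_rsd(replicate_recoveries):.2f}%")
```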
A deviation display method for visualising data in mobile gamma-ray spectrometry.
Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer
2010-09-01
A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialization time of about 10min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.
Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen
2011-01-01
Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
Code of Federal Regulations, 2010 CFR
2010-07-01
... which are distinct from the standard deviation process and specific to the requirements of the Federal... agency request a deviation from the provisions of this part? 102-38.30 Section 102-38.30 Public Contracts... executive agency request a deviation from the provisions of this part? Refer to §§ 102-2.60 through 102-2...
Hanscombe, Ken B.; Trzaskowski, Maciej; Haworth, Claire M. A.; Davis, Oliver S. P.; Dale, Philip S.; Plomin, Robert
2012-01-01
Background The environment can moderate the effect of genes - a phenomenon called gene-environment (GxE) interaction. Several studies have found that socioeconomic status (SES) modifies the heritability of children's intelligence. Among low-SES families, genetic factors have been reported to explain less of the variance in intelligence; the reverse is found for high-SES families. The evidence however is inconsistent. Other studies have reported an effect in the opposite direction (higher heritability in lower SES), or no moderation of the genetic effect on intelligence. Methods Using 8716 twin pairs from the Twins Early Development Study (TEDS), we attempted to replicate the reported moderating effect of SES on children's intelligence at ages 2, 3, 4, 7, 9, 10, 12 and 14: i.e., lower heritability in lower-SES families. We used a twin model that allowed for a main effect of SES on intelligence, as well as a moderating effect of SES on the genetic and environmental components of intelligence. Results We found greater variance in intelligence in low-SES families, but minimal evidence of GxE interaction across the eight ages. A power calculation indicated that a sample size of about 5000 twin pairs is required to detect moderation of the genetic component of intelligence as small as 0.25, with about 80% power - a difference of 11% to 53% in heritability, in low- (−2 standard deviations, SD) and high-SES (+2 SD) families. With samples at each age of about this size, the present study found no moderation of the genetic effect on intelligence. However, we found the greater variance in low-SES families is due to moderation of the environmental effect – an environment-environment interaction. Conclusions In a UK-representative sample, the genetic effect on intelligence is similar in low- and high-SES families. Children's shared experiences appear to explain the greater variation in intelligence in lower SES. PMID:22312423
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may capture only a partial view of the scene and contains both "pertinent" and "immaterial" information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image retains high quality, without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour image (input image) is split into its three channels R, G, and B for each source image; (2) the DCT algorithm is applied to each channel (R, G, and B); (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) each block of R of the source images is compared with its counterpart based on the variance value, and the block with the maximum variance is selected to be the block in the new image; this process is repeated for all channels of the source images; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique show good performance compared with older techniques. © 2016 Wiley Periodicals, Inc.
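A hedged Python sketch of the block-wise DCT fusion described in steps (1)–(5); the toy images, helper names, and the assumption that image dimensions are multiples of the block size are illustrative choices rather than the authors' implementation. The variance of a block's DCT coefficients serves as the focus measure, and the block with the larger variance is kept.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(a, b, block=8):
    # Fuse two grayscale channels block by block: keep the block whose DCT
    # coefficients have the larger variance (a simple focus measure).
    h, w = a.shape
    out = np.empty_like(a, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            da = dctn(a[i:i+block, j:j+block], norm="ortho")
            db = dctn(b[i:i+block, j:j+block], norm="ortho")
            chosen = da if da.var() > db.var() else db
            out[i:i+block, j:j+block] = idctn(chosen, norm="ortho")
    return out

def fuse_rgb(img_a, img_b):
    # Steps 1-5: split into R, G, B; DCT; variance comparison; inverse DCT; recombine.
    return np.stack([fuse_channel(img_a[..., c].astype(float),
                                  img_b[..., c].astype(float)) for c in range(3)], axis=-1)

# Toy example: two 64x64 RGB arrays (in practice, differently focused photos of one scene).
rng = np.random.default_rng(5)
a = rng.uniform(0, 255, (64, 64, 3))
b = rng.uniform(0, 255, (64, 64, 3))
fused = fuse_rgb(a, b)
print(fused.shape)
```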
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
Graves, Robert W.; Aagaard, Brad T.
2011-01-01
Using a suite of five hypothetical finite-fault rupture models, we test the ability of long-period (T>2.0 s) ground-motion simulations of scenario earthquakes to produce waveforms throughout southern California consistent with those recorded during the 4 April 2010 Mw 7.2 El Mayor-Cucapah earthquake. The hypothetical ruptures are generated using the methodology proposed by Graves and Pitarka (2010) and require, as inputs, only a general description of the fault location and geometry, event magnitude, and hypocenter, as would be done for a scenario event. For each rupture model, two Southern California Earthquake Center three-dimensional community seismic velocity models (CVM-4m and CVM-H62) are used, resulting in a total of 10 ground-motion simulations, which we compare with recorded ground motions. While the details of the motions vary across the simulations, the median levels match the observed peak ground velocities reasonably well, with the standard deviation of the residuals generally within 50% of the median. Simulations with the CVM-4m model yield somewhat lower variance than those with the CVM-H62 model. Both models tend to overpredict motions in the San Diego region and underpredict motions in the Mojave desert. Within the greater Los Angeles basin, the CVM-4m model generally matches the level of observed motions, whereas the CVM-H62 model tends to overpredict the motions, particularly in the southern portion of the basin. The variance in the peak velocity residuals is lowest for a rupture that has significant shallow slip (<5 km depth), whereas the variance in the residuals is greatest for ruptures with large asperities below 10 km depth. Overall, these results are encouraging and provide confidence in the predictive capabilities of the simulation methodology, while also suggesting some regions in which the seismic velocity models may need improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
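The following is a simplified 1-D sketch of the penalized weighted least-squares idea, with a diagonal inverse-variance weight standing in for the full variance-covariance matrix described above; the signal, noise model, and penalty strength are illustrative assumptions, not the authors' DECT implementation.

```python
import numpy as np

def penalized_wls_denoise(y, variances, lam=5.0):
    # Minimize (x - y)^T W (x - y) + lam * ||D x||^2, with W = diag(1/variance)
    # (inverse-variance weighting) and D a first-difference smoothness penalty.
    n = len(y)
    W = np.diag(1.0 / np.asarray(variances))
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    A = W + lam * D.T @ D
    return np.linalg.solve(A, W @ y)

rng = np.random.default_rng(6)
truth = np.concatenate([np.full(40, 1.0), np.full(40, 2.5)])   # piecewise-constant "material image"
noise_sd = np.where(np.arange(80) < 40, 0.1, 0.4)              # spatially varying noise level
noisy = truth + rng.normal(0.0, noise_sd)

denoised = penalized_wls_denoise(noisy, variances=noise_sd**2, lam=10.0)
print("error SD before:", round(np.std(noisy - truth), 3),
      "after:", round(np.std(denoised - truth), 3))
```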
Nguyen, Thuy T T; Bowman, Phil J; Haile-Mariam, Mekonnen; Nieuwhof, Gert J; Hayes, Benjamin J; Pryce, Jennie E
2017-09-01
Excessive ambient temperature and humidity can impair milk production and fertility of dairy cows. Selection for heat-tolerant animals is one possible option to mitigate the effects of heat stress. To enable selection for this trait, we describe the development of a heat tolerance breeding value for Australian dairy cattle. We estimated the direct genomic values of decline in milk, fat, and protein yield per unit increase of temperature-humidity index (THI) using 46,726 single nucleotide polymorphisms and a reference population of 2,236 sires and 11,853 cows for Holsteins and 506 sires and 4,268 cows for Jerseys. This new direct genomic value is the Australian genomic breeding value for heat tolerance (HT ABVg). The components of the HT ABVg are the decline in milk, fat, and protein per unit increase in THI when THI increases above the threshold of 60. These components are weighted by their respective economic values, assumed to be equivalent to the weights applied to milk, fat, and protein yield in the Australian selection indices. Within each breed, the HT ABVg is then standardized to have a mean of 100 and standard deviation (SD) of 5, which is consistent with the presentation of breeding values for many other traits in Australia. The HT ABVg ranged from -4 to +3 SD in Holsteins and -3 to +4 SD in Jerseys. The mean reliabilities of HT ABVg among validation sires, calculated from the prediction error variance and additive genetic variance, were 38% in both breeds. The range in ABVg and their reliability suggests that HT can be improved using genomic selection. There has been a deterioration in the genetic trend of HT, and to moderate the decline it is suggested that the HT ABVg should be included in a multitrait economic index with other traits that contribute to farm profit. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
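A minimal sketch of the final standardization step described above, rescaling raw values to a published scale with mean 100 and standard deviation 5; the raw input values are hypothetical.

```python
import numpy as np

def standardize_abv(raw_values, mean=100.0, sd=5.0):
    # Rescale raw breeding values to the published scale:
    # mean 100 and standard deviation 5 within the breed.
    raw = np.asarray(raw_values, dtype=float)
    return mean + sd * (raw - raw.mean()) / raw.std(ddof=1)

# Hypothetical raw heat-tolerance values (weighted declines per unit THI).
raw_ht = np.array([-0.8, -0.2, 0.1, 0.4, 1.1])
print(np.round(standardize_abv(raw_ht), 1))
```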
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
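As an illustration of the bootstrap variance estimator in a much simpler setting than the weighted Cox model studied above, the Python sketch below resamples subjects and takes the standard deviation of the replicated IPTW estimates as the standard error of an average treatment effect on a continuous outcome. The data-generating model is an assumption for illustration, and in a real analysis the propensity score would be re-estimated within each bootstrap replicate.

```python
import numpy as np

def iptw_ate(y, treated, ps):
    # Inverse-probability-of-treatment-weighted estimate of the average treatment effect.
    w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    mu1 = np.average(y[treated], weights=w[treated])
    mu0 = np.average(y[~treated], weights=w[~treated])
    return mu1 - mu0

def bootstrap_se(y, treated, ps, n_boot=500, seed=7):
    # Resample subjects with replacement and re-estimate; the SD of the
    # bootstrap replicates estimates the standard error of the ATE.
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        reps.append(iptw_ate(y[idx], treated[idx], ps[idx]))
    return np.std(reps, ddof=1)

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(size=n)                              # a confounder
ps = 1.0 / (1.0 + np.exp(-x))                       # true propensity score
treated = rng.uniform(size=n) < ps
y = 1.0 * treated + 0.5 * x + rng.normal(size=n)    # true treatment effect = 1.0

print("ATE estimate:", round(iptw_ate(y, treated, ps), 3),
      "bootstrap SE:", round(bootstrap_se(y, treated, ps), 3))
```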
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children's growth in the United States after 1870. The Children's Bureau's 1918 "Weighing and Measuring Test" measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. As well as anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had a weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial. Children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children's stature. PMID:28943749
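For readers unfamiliar with the metric, a height-for-age score is a z-score against an age- and sex-specific reference mean and standard deviation; a minimal sketch with hypothetical reference values:

```python
def height_for_age_z(height_cm, ref_mean_cm, ref_sd_cm):
    # z-score: how many reference standard deviations a child's height lies
    # above (positive) or below (negative) the reference mean for that age and sex.
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Hypothetical numbers: a child measured at 110.0 cm against an assumed
# reference mean of 115.0 cm and SD of 5.0 cm for that age and sex.
print(round(height_for_age_z(110.0, 115.0, 5.0), 2))   # -> -1.0
```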
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen‐Loeve decomposition‐based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is on the basis of an innovative combination of Karhunen‐Loeve decomposition, polynomial expansion, and perturbation methods. The random log‐transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables with their coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, head h is decomposed as a perturbation expansion series Σh(m), where h(m) represents the mth‐order head term with respect to the standard deviation of lnKS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients hi1,i2,...,im(m) are deterministic and solved sequentially from low to high expansion orders using MODFLOW‐2000. Finally, the statistics of head and flux are computed using simple algebraic operations on hi1,i2,...,im(m). A series of numerical test results in 2‐D and 3‐D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.
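A hedged numerical sketch of the first step of this approach, a discrete Karhunen-Loeve expansion of a 1-D log-conductivity field with exponential covariance; the grid, variance, and correlation length are illustrative assumptions, and the head perturbation expansion and MODFLOW-2000 solves are not reproduced here.

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a 1-D log-conductivity field lnK with
# exponential covariance, truncated to its leading modes.
n, var_lnK, corr_len = 200, 1.0, 0.2
x = np.linspace(0.0, 1.0, n)
C = var_lnK * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(C)          # eigen-decomposition of the covariance
order = np.argsort(eigvals)[::-1]             # sort modes by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_modes = 30                                  # truncate the expansion
rng = np.random.default_rng(9)
xi = rng.standard_normal(n_modes)             # orthogonal standard Gaussian variables
lnK = eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)   # one realization (zero mean)

print("captured variance fraction:", round(eigvals[:n_modes].sum() / eigvals.sum(), 3))
print("sample field std:", round(lnK.std(), 3))
```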
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
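A minimal sketch of the two combination rules named above, Fisher's and Tippett's methods for combining independent single-phenomenology p-values; the example p-values are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_combined(p_values):
    # Fisher: -2 * sum(log p) is chi-square with 2k degrees of freedom under H0.
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return stat, stats.chi2.sf(stat, df=2 * len(p))

def tippett_combined(p_values):
    # Tippett: the minimum p-value; under H0, P(min p <= m) = 1 - (1 - m)^k.
    p = np.asarray(p_values, dtype=float)
    m = p.min()
    return m, 1.0 - (1.0 - m) ** len(p)

# Hypothetical single-phenomenology screening p-values (e.g. Ms:mb and event depth).
p_single = [0.08, 0.20]
print("Fisher:", fisher_combined(p_single))
print("Tippett:", tippett_combined(p_single))
```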
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
Optimization of a genomic breeding program for a moderately sized dairy cattle population.
Reiner-Benaim, A; Ezra, E; Weller, J I
2017-04-01
Although it is now standard practice to genotype thousands of female calves, genotyping of bull calves is generally limited to progeny of elite cows. In addition to genotyping costs, increasing the pool of candidate sires requires purchase, isolation, and identification of calves until selection decisions are made. We economically optimized via simulation a genomic breeding program for a population of approximately 120,000 milk-recorded cows, corresponding to the Israeli Holstein population. All 30,000 heifers and 60,000 older cows of parities 1 to 3 were potential bull dams. Animals were assumed to have genetic evaluations for a trait with heritability of 0.25 derived by an animal model evaluation of the population. Only bull calves were assumed to be genotyped. A pseudo-phenotype corresponding to each animal's genetic evaluation was generated, consisting of the animal's genetic value plus a residual with variance set to obtain the assumed reliability for each group of animals. Between 4 and 15 bulls and between 200 and 27,000 cows with the highest pseudo-phenotypes were selected as candidate bull parents. For all progeny of the founder animals, genetic values were simulated as the mean of the parental values plus a Mendelian sampling effect with variance of 0.5. A probability of 0.3 for a healthy bull calf per mating and a genomic reliability of 0.43 were assumed. The 40 bull calves with the highest genomic evaluations were selected for general service for 1 yr. Costs included genotyping of candidate bulls and their dams, purchase of the calves from the farmers, and identification. Costs of raising culled calves were partially recovered by resale for beef. Annual costs were estimated as $10,922 + $305 × candidate bulls. Nominal profit per cow per genetic standard deviation was $106. The economic optimum with a discount rate of 5%, first returns after 4 yr, and a profit horizon of 15 yr was obtained by genotyping 1,620 to 1,750 calves for all numbers of bull sires. However, 95% of the optimal profit can be achieved with only 240 to 300 calves. The higher reliabilities achieved through addition of genomic information to the selection process contribute not only to higher genetic gain, but also to higher absolute profits. In addition, the optimal profits are obtained for a lower number of calves born in each generation. Inbreeding, as allowed within genomic selection for the Israeli herd, had virtually no effect on genetic gain or on profits, when compared with the case of exclusion of all matings that generate inbreeding. Annual response to selection ranged from 0.35 to 0.4 genetic standard deviation for 4 to 15 bull sires, as compared with 0.25 to 0.3 for a comparable half-sib design without genomic selection. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Blanks, J. K.; Hintz, C. J.; Chandler, G. T.; Shaw, T. J.; McCorkle, D. C.; Bernhard, J. M.
2007-12-01
Mg/Ca and Sr/Ca were analyzed from core-top individual Hoeglundina elegans aragonitic tests collected from three continental slope depths within the South Carolina and Little Bahama Bank continental slope environs (220 m to 1084 m). Our study utilized only individuals that were labeled with the vital probe CellTracker Green - unlike bulk core-top material often stained with Rose Bengal, which has known inconsistencies in distinguishing live from dead foraminifera. DSr × 10 values were consistently 1.74 ± 0.23 across all sampling depths. The analytical error in DSr values (0.7%) determined by ICP-MS between repeated measurements on individual H. elegans tests across all depths was less than the analytical error on repeated measurements from standards. Variation in DSr values was not directly explained by a linear temperature relationship (p=0.0003, R2=0.44) over the temperature range of 4.9-11.4°C with a sensitivity of 59.8 μmol/mol per 1°C. The standard error from regressing DSr against temperature yields ±3.4°C, which is nearly 3× greater than reported in previous studies. Sr/Ca was more sensitive for calibrating temperature than Mg/Ca in H. elegans. Observed scatter in DSr was too great across individuals of the same size and of different sizes to resolve ontogenetic effects. However, higher DSr values were associated with smaller individuals and warmer/shallower sampling depths. The highest DSr values were observed at the intermediate sampling depth (~600 m). No significant ontogenetic relationship was found across DSr values in different sized individuals due to tighter overall constrained variance; however, lower DSr values were observed from several smaller individuals. Several dead tests of H. elegans showed no significant differences in DSr values compared to live specimens cleaned by standard cleaning methods, unlike the higher dead than live DMg values observed for the same individuals. There were no significant deviations in DSr across batches cleaned on separate days, unlike the observed sensitivity of DMg across batches. A subset of samples was reductively cleaned (hydrazine solution) and exhibited DMg values within analytical precision of those observed for non-reductively cleaned samples. Therefore, deviations in DMg values resulting from the removal of the reductive cleaning step did not explain analytical errors greater than published values for Mg/Ca or the high variance across same sized individuals. Variation in DMg values across the same cleaning methods and from dead individuals suggests the need for a careful look into how foraminiferal aragonite should be processed. These findings provide evidence that both Mg and Sr in benthic foraminiferal aragonite reflect factors in addition to temperature and pressure that may interfere with absolute temperature calibrations. Funded by NSF OCE 0351029, OCE 0437366, and OCE-0350794.
NASA Astrophysics Data System (ADS)
Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis
2017-11-01
Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogenous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
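A hedged sketch of one ingredient of such a Monte Carlo study: sampling a spatially correlated Young modulus field with prescribed variance and correlation length via a Cholesky factorization of the covariance matrix. The grid size, exponential covariance model, and parameter values are illustrative assumptions, and the phase-field fracture solver itself is not shown.

```python
import numpy as np

def gaussian_random_field(nx, ny, mean, std, corr_len, seed=0):
    # Sample a 2-D Gaussian random field with exponential covariance by
    # factorizing the covariance matrix of the grid points (fine for small grids).
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = std**2 * np.exp(-d / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(pts)))   # jitter for numerical stability
    rng = np.random.default_rng(seed)
    return (mean + L @ rng.standard_normal(len(pts))).reshape(nx, ny)

# Hypothetical heterogeneous Young modulus field (Pa); the variance and correlation
# length are the knobs whose influence on fracture paths is studied.
E = gaussian_random_field(32, 32, mean=20e9, std=2e9, corr_len=4.0)
print(E.shape, round(E.std() / 1e9, 2), "GPa spread")
```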